Modern Security Practices for Web Developers

When you’re building a web application, your servers become responsible for managing other people’s data. Thus it falls on you, the app developer, to implement security measures to prevent data breaches. This is especially true if your application gets popular: the more users you have, the more valuable the data becomes, and the more costly a breach can be.

Just recently, we had the Equifax breach affecting 143 million Americans, the Alteryx breach of Experian data on 235 million consumers, the Yahoo breach compromising 3 billion accounts, and so on. It seems this has become a routine occurrence. That’s what motivated this article.

Web Security in 2018

In 2018, there are plenty of good guides and resources on web security. This guide is different. It describes new techniques that have become possible on the Web, which eliminate large swaths of attacks. It lists principles that you can apply in your own apps to make them progressively more secure.

As developers of a large, open-source platform for running social networks and payment systems, we have had to confront security issues head-on. We know that popular open-source projects can be especially vulnerable, since anyone can see and analyze the back-end code. This motivated us even more to follow Kerckhoffs’s principle: that the security of a system should rest on its design, and not its obscurity. So let’s dive in.

The Principles

  1. Do not store sensitive data unencrypted at rest
  2. Do not store keys to encrypted data in the same place
  3. Require cryptographic signatures for assertions
  4. More proof is better than less proof
  5. Use blockchains for data consistency


Do not store sensitive data unencrypted at rest

The easiest way to prevent sensitive data from being stolen from the server is not storing it there. Often, it can be stored on the user’s client. At the very least, this limits a data breach to only the compromised clients. These days, the makers of operating systems (especially mobile operating systems) encrypt user data by default, and the user unlocks their phone or laptop with a biometric ID or passcode. Thus, to get the data, an attacker would have to access the user’s unlocked device.

Of course, sometimes you need this data on the server side. People may want to access and use their own information when logged in. External APIs may require an OAuth token or even a username and password. If you’re going to store sensitive data – whether on the client or the server – consider encrypting it.

It used to be that data stored on the client could not be reliably encrypted. But in the last few years, the Web Crypto API has been finalized and is now supported by all major browsers, including iOS Safari and Android Chrome. It gives you access to all the cryptographic tools you need to secure your data on the client, including random number generation and the derivation and handling of private keys, and the browser can now ensure that keys are never exported out of the client-side website context.


Do not store keys to encrypted data in the same place

For really sensitive data on the server, encrypt it when storing it in the database. Many database vendors actually have transparent solutions for this. To decrypt the data, you will need:

  1. The app’s private key
  2. The private key of the user who encrypted the data

(See the “more proof is better than less” principle, below.)

Neither of these keys should be stored in the same place as the database, so an attacker would have to compromise multiple places to get the decrypted data. You can even split the keys up and store the parts in different places. In any case, once you obtain the keys, do not save or export them anywhere; keep them only in transient operating memory.

The private key of the user who encrypted the data can itself be encrypted, and unlocked by using a valid user client device. For each device, another copy of the key would have to be stored, encrypted with that device’s private key. Each device is identified to the domain by its corresponding public key.

It’s a little-known fact that most modern browsers allow you to save Web Crypto keys on the client using the new IndexedDB API. However, once again, you don’t want to store these keys without encrypting them first. Otherwise, anyone with access to the browser’s database will be able to see the keys and steal them, allowing them to take actions on the user’s behalf.

The typical approach is to have a master key which is used to encrypt various things on behalf of the owner. This master key is then stored encrypted by one or more access keys, which are derived from a user’s passcode or biometrics (such as their fingerprint or Face ID). A user can have one access key for each of their fingers, for instance. This way, if the user adds another password or finger, they don’t have to re-encrypt everything with it as well.

The access keys should not be stored on the device! They are derived anew each time from the user’s password or biometrics, and used to decrypt the master key every time it’s needed. The Web Crypto API can prevent these keys from being exported via Javascript, and if your code is running inside a trusted browser or mobile authentication session, other apps can’t get at them.

If other Javascript is running in the website, it might misuse the keys, so it’s best to expose only a few functions like MyApp.sign(data). Load the script defining these functions via the first script tag in the HTML document, and use Object.freeze() (from ECMAScript 5, supported in all modern browsers) to prevent other Javascript code from replacing the functions.

Ideally, the user’s passcode and biometrics should be entered in a trusted sandbox mode that other code running on the computer (such as other Javascript, or screencasting software) can’t access. Sadly, there is still no standard way to isolate such a sandbox on the Web. Fingerprints aren’t going to be intercepted by keyloggers and screencasting, but passwords can be. In the future, such a mode may be developed by operating system vendors, and users would recognize it because it would display some photo or phrase that only the user would know (because they entered it when setting up their device). This is the weakest link with the Web (and most operating systems) in 2018: anyone can spoof a password prompt on iOS, MacOS, Windows, and the Web. Biometrics are better.

For now, you can simulate such a mode with an iframe running on a domain of the user’s choice, which the user trusts. When the user puts their keyboard focus in the password area, they would see their secret phrase in the iframe. (There used to be a standard called xAuth that would have allowed the enclosing site to find out this domain, but it has fallen by the wayside.)

Long story short, it looks like this:

  1. user inputs passcode or biometrics (in a trusted sandbox)
  2. derive access keys from that (in the same trusted sandbox)
  3. use them to decrypt master key for that domain stored by the user agent on the user’s device
  4. use master key to decrypt information sent to the user, and sign information sent by the user


Require cryptographic signatures for assertions

This is all well and good, but what if the server doesn’t send the right Javascript? On the Web, we constantly have to trust the server to send the right resources. Content Delivery Networks that serve the same files to thousands or millions of sites are juicy targets: a hacker who compromises one can modify files across many sites at once.

It would be nice if there were a cryptographic hash of the file we could obtain through a side channel, such as from someone (like ourselves) who downloaded the resource from that URL and reviewed it. Luckily, the Web now allows any site to easily add Subresource Integrity checks to do just that.

However, the top-level resource that you request via a URL can still be changed, even if you load it via https. Perhaps the server was compromised and hacked. Perhaps a certificate authority was compromised, and someone used DNS spoofing to serve you a fake site. Having browsers verify top-level resources against a known hash would be really useful, but the current breed of browsers doesn’t support it. This is the second weakest link on the Web. Perhaps one day it will be fixed with IPFS and content-addressable URLs. For now, at Qbix we plan to release a browser extension that parses the top-level location URL for some appended hash via special characters (##hmac=...) and, if present, verifies it or rejects loading the document.

Responding to a request with a file is an example of an assertion: “this is the resource at this URL.” The hash acts as a cryptographic fingerprint of this assertion. But there are many other assertions you can sign, and the general principle is to require signatures for assertions.

If the entity checking the signatures of the assertion is the same entity issuing the assertion, then it can just keep a private key around. For example, it’s useful to sign session ids that your server generates (use an HMAC, and include that signature as part of the session id). This way, your network can reject bogus session ids (such as “blabla”) right away, without hitting the database or doing any I/O at all. Even the computers acting as gateways into your network can keep the private key and reject all requests that don’t contain a properly signed session id.

This enables another aspect of security: uptime. Your network can handle many more requests if unauthorized requests are stopped early on without putting a strain on expensive resources. Users without a session may be able to request static assets from a CDN, but your app servers won’t be weighed down. And within authorized sessions, you can implement quotas for using resources, throttling users and preventing them from abusing the network, or even charging them for their usage. All of this is possible simply from requiring signatures for assertions.

If the entity checking the assertion is not necessarily the same one that issued it, then you can use public key cryptography: the assertions should be signed by the issuer’s private key, and anyone with the public key can verify them. For more efficiency, or for off-the-record messaging, you may want to use a hybrid cryptosystem, where you bootstrap with private keys but generate symmetric keys that can be shared per-session or per-message.


More proof is better than less proof

It’s a simple principle: you don’t become less secure by requiring more proof (of identity, permissions, etc.) before granting a request.

Since the early days of the web, cookies were used to transmit session ids. Until the advent of localStorage, that’s as far as people could really go. If you sent a request with the correct session id, your request was executed in the context of the user who authenticated with that session. Thus, lots of attacks were developed, including session hijacking and fixation attacks, where the attacker gains access to the user’s session and can impersonate the user. The cookie became a sort of “bearer token”: anyone who had it, even someone who had stolen it, could access the protected resources. In the last few years, companies started a big push to get all website traffic encrypted over https. This is a laudable goal for many reasons, but doing it just to secure cookies is not enough.

With localStorage, and now with Web Crypto, you can do better. The server can require additional information to be sent along with each request. We’ve been talking about signing requests with private keys, so now it’s time to put it to use. Each device would have a master key per domain, per user. Each request would be signed (asymmetric cryptography using data = MyApp.sign(data)) with this master private key before being sent to the server. The session ids would still identify the session on the server, but now, the master public key would be sent along in the request to verify that a known and authorized device was, in fact, used to generate a request.

When new devices need to be provisioned to execute (some or all types of) requests on behalf of a user, the general approach is to use existing devices to sign the provision authorization. Policies could be developed (and checked on the server) for how many devices are needed to provision a new one (typically, one device is enough). This signed authorization can easily be communicated via QR codes (camera), bluetooth, sound recording, email, or other side channels. Ideally, the communication should be secured as well, by encrypting it with the new device’s public key, so only the new device can use it.

Finally, in keeping with the principle of “more proof is better than less,” users should also be able to turn on two-factor authentication. Not only can this alert them when a new login is attempted, but it requires an attacker to compromise more than one factor. Factors come in three types:

A) something you have (e.g. your phone)
B) something you know (e.g. password)
C) something you are (e.g. biometrics)

Typically, A is combined with either B or C, and usually this is enough. Provided you lock your phone when you walk away from it, and the OS encrypts all your data and requires a passcode (with rate-limiting) to access it, A alone can be enough. You may relax the extra requirements for personal devices.

But for environments like logging in on a website on a public computer, you might want to require A and C on the phone, or A with B on the computer. There would have to be some way to get information from the phone to the computer without an internet signal, and that’s usually done by typing in 6 numbers from an app like Google Authenticator or Authy.

Notice, by the way, that if you use A and C, then passwords are not needed at all. People usually re-use them, and choose really easy ones. So if you’re going to go that route, at least encourage people to use passphrases, by requiring several dictionary words with spaces between them.


Use blockchains for data consistency

When a new device is authorized to be used by a user, and a session is authenticated, it becomes a liability. Anyone who steals the device and unlocks it can make requests as the logged-in user through their authenticated session. So when people lose their devices, you need to be able to revoke the device key. In general, when a computer becomes compromised, you would like to revoke that computer’s access.

In order to do this, you would have to log into the server with another device and revoke the lost one. However, if all it took was one device to revoke the others, an attacker would be able to quickly revoke all your other devices, locking you out of your own account. It would be your word against theirs. You could send over a copy of some ID card; they could conceivably forge one; and so on.

Instead, it’s better to have some additional public/private key pairs for just this purpose, pairs you either keep on other computers, print and hide around town, or give to friends. If you’ve lost ALL your devices, you can still restore your account with M of N of these private keys, plus an additional required key K (so that M of your friends can’t take over your account). The key K could be derived from a passphrase which only you know. If you really want to get fancy, you can allow one of several keys, including keys derived from your biometrics or devices hidden in your butt (you know, for guys like Jason Bourne).

In all this, however, you are still trusting the server. Many people, for example, trust third-party websites like Facebook to help them authenticate with other sites. The problem is that Facebook then “controls” your identity on those sites, and can cut off your ability to even get back into them. (The author of this article had such an experience with SoundCloud, after Facebook changed the user ids it reported to SoundCloud.)

We are gradually moving to a web where people own their own identity and information. Qbix is actively participating in this movement by publishing open protocols for authentication, which we implement in our own platform. In such a system, people shouldn’t need to rely on a third party website like Facebook to authenticate with other sites. They should be able to prove they control an account on Facebook, or any other site, by using public key cryptography.

If I can get site A to display something at a URL containing a user id, where only that user could be authorized to post, then I can post some text that includes public keys and is signed by corresponding private keys. This text could also name my favorite apps or domains I prefer to use to authenticate with (the new xAuth). When I visit site B, and want to prove that I am also some user on site A (i.e. I control that account), all I have to do is claim this, and the site would seamlessly open my favorite authentication app / domain, which has my private key, and let it answer a cryptographic challenge, like signing a random piece of text. That proves that the same person who is currently signing into site B also controls the claimed account on site A. Site B can then keep this information. In the Qbix auth protocol, we describe extensions to this scheme where user ids can be pairwise anonymous, so you share your identities with some people and not others.

When you want information, such as your list of authorized devices, to be propagated in a way that no one site controls, you need a blockchain. Blockchains “solve” the problem of having to trust one server and its database, which could get hacked or compromised: they require many independent servers to validate each transaction. While it’s still possible to compromise all these servers, it’s exponentially harder.

Blockchains help solve many problems where multiple validators check the rules of an evolving stream and prevent forks. Whether you are revoking a device, or transferring ownership of a token, the validators need to make sure that all the rules are followed, and that the stream hasn’t forked off into valid but conflicting streams.

Merkle Trees (and the more general Merkle DAGs) have a great property that, if you hold the root hash, you can be sure of the integrity of the entire tree. Given any node in the DAG, you can verify that it belongs in the DAG by simply checking its Merkle Branch, which is O(log n) operations. “Belongs in the DAG” means that some process was followed to generate the hashes of each parent given its children, all the way up the root. “Integrity” means that this process followed the proper rules, at least in the opinion of the computers that generated that DAG.

Now, with public and private keys, you can have many computers signing off on the next parent of a Merkle DAG. All those signatures and hashes are combined into one hash. This is, essentially, a “block”. The trunk of such a Merkle DAG is a blockchain. You don’t need to know about everything in the tree, just the trunk. Holding a hash from the trunk, you’d have access to a wealth of information signed by participants and validators, and have confidence that all transactions until that point were validated and checked.

In Qbix Platform 2.0, we will focus more on decentralized governance and security, as a drop-in replacement for Qbix Platform features like Streams. In other words, by building on the Qbix Platform today, you just have to focus on representing, say, a chess game and adding chess rules. And when 2.0 comes out, you will be able to just have a blockchain verifying that all the rules in the game were followed correctly. All the access control and subscription rules that we have now will go from being a domain-specific language to being defined in a scripting language. People will write rules in Javascript, including rules about access, and rules about adding and removing other rules. Validators on the blockchain will sign off and verify the consistency and integrity of the whole data structure, which you will be able to have just by holding one hash: that of the root.

 


What’s Preventing Crypto for Everyday Payments

Crypto as a medium of exchange

It’s been nearly 10 years since Satoshi Nakamoto launched Bitcoin, and 13 years since Ryan Fugger designed the original Ripple network. Crypto-currencies are still nowhere near being a mainstream way to pay for things. Almost no one uses them to make actual everyday purchases, the way they use Visa credit cards, or PayPal, or WeChat in China.

Even though Bitcoin was conceived as a “peer to peer electronic cash system”, it (BTC) became mostly a store of value, a commodity and a means of speculation. RipplePay was originally a peer-to-peer electronic credit system, but eventually Ripple (XRP) became a network for moving money between banks. What happened to the original goal of making a currency that people actually use to transact in everyday life?

Design decisions

Much of what we see is a result of the original architectural design decisions behind the networks and how they work. It’s tough to predict how something will evolve, especially when it’s built on a bold new idea.

Bitcoin’s bold idea was a novel use of Proof of Work to determine global consensus. But this has led to an arms race of mining power, to the point where Bitcoin miners now use more electricity than 159 out of 195 countries, all to secure a blockchain that can handle only a few transactions per second globally, and may even reverse them in the future.

RipplePay’s bold idea was to let people extend interest-free credit to one another via trustlines. Payments between friends require no third party at all, and payments within densely connected communities could be routed through intermediaries. In almost every way, this was the opposite of Bitcoin: credit vs value, hyper-local vs globally recorded transactions, etc. But in the end, not enough people were willing to take on the risk of being financial intermediaries, and the project turned into a system for moving money between banks. It also introduced a global value-based token called XRP.

Global consensus

Both BTC and XRP networks today are monolithic global networks, requiring global consensus about every transaction. Ripple in particular is well-suited for settling debts between large institutions, but there is still a missing piece on the level of individuals.

It takes a serious amount of time and money to confirm Bitcoin transactions, making them impractical for everyday purchases. Today, fees range between $20 and $40 to get the confirmation time down to 10 minutes (well, several times that for most vendors, who want “multiple confirmations”).

Ripple’s XRP confirmation time is much faster, but it has achieved this by carefully and slowly building out its network of trusted validators. Handling validators that misbehave would still be a time consuming human effort.

However, Ripple’s technology is well suited for settling money between large organizations, without the need for them to trust one another. Intercoin aims to provide the missing piece: letting these organizations enable everyday transactions by users within their community, and between communities.

Adoption by Users

Another issue is that people can’t easily move money in and out of the crypto economy. These days, they have to set up an account with an exchange or portal such as Coinbase, verify their identity, scan their documents, and then wait up to a week as payments to the exchange clear. And this is the “user-friendly” way that replaced the old way of meeting up with someone on the street, for a value exchange that functioned similarly to a drug deal.

In general, it’s hard for people to join a monolithic, global network. For example, to really join the Internet, one would need their own IP address, rent equipment, enter into agreements with upstream providers, pay large sums, comply with regulations, and so on. Most people who “just want to get on the internet” walk into a coffee shop and get on the wifi, or connect their home to an ISP.

Thus, Intercoin is building technology to handle the second half of the equation: giving users the freedom to maintain accounts between communities, control their own data and assets, and move freely between them.

Adoption by Developers

There are tons of apps and websites out there. They adopt payment systems like PayPal or Stripe by integrating simple user interface flows to let people check out or start a subscription. Lots of services provide a simple way to integrate a “Pay” or “Subscribe” button on a website or app. ApplePay and AndroidPay have made it even easier by storing credit card information on the phone, and people can now pay merchants with just a fingerprint to authorize the payment.

In order for crypto payments to become mainstream, apps would have to be able to integrate such buttons, where people would be able to pay with their credit card or bank account or crypto account stored in their phone, or on ApplePay. The whole process would have to be seamless, with the user paying in the currency of their choice, and the merchant being eventually paid out in the currency of their choice.

Reversing Transactions

Another issue is the fundamental difference in how the crypto world treats transactions, as opposed to the existing financial system. With Bitcoin and nearly all other Blockchain-based systems, transactions are final, so let the buyer beware. If the seller absconds with the money and never sends the product, there is no financial intermediary to reverse the transaction. If money is stolen from a wallet, there is no way to reverse it.

But more to the point, this difference presents a major problem for letting people pay with traditional financial instruments such as a credit card or even an ACH transfer, because both are reversible. If you’re running an exchange, or a service like CoinBase, you can get screwed by people who deposit money and then issue a chargeback, as this exchange was.

Wires are about the only mainstream transaction that’s electronic and non-reversible, and they charge significant – but fixed – fees ranging from $20 to $70 per transaction. So, individuals would have to make relatively large wires in order to have low fees in terms of percentage.

Intercoin allows the organizations to do the wires instead of the individuals, and handle chargebacks the same way that PayPal or Visa handles them. Coinbase has a few avenues available to them when dealing with chargebacks, but communities would be able to claw the money back from local merchants accepting their currency, providing a smoother transition to and from the traditional payment system. In a sense, Intercoin is letting each community “run its own open source PayPal”.

Tethers

Bitcoin’s rise in value makes it a good investment and store of value, but a lousy medium of exchange. And, its price volatility would lead many merchants to cash out to local currency, and pay fees (again). They can, however, exchange for tethers which run on top of Bitcoin, and are basically crypto representations of Dollars, Euros, etc. If more people accepted payments in tethers rather than the actual Dollars, etc. then merchants could keep their money in crypto. But that requires adoption of crypto as a medium of exchange in the first place.

Making it Work

On Intercoin, USD and GBP are represented by crypto equivalents, similar to how Tethers work in the Bitcoin ecosystem, but it’s done much more organically: USD and GBP are just another “local currency” which can be exchanged with Intercoin via market makers.

A community in the USA pre-purchases some Intercoin for a low fee by sending a wire to a Market Maker, which exchanges it for Intercoin. When a person comes and pays them $20, the community’s Payment Network begins keeping $20 worth of Intercoin on reserve as an “asset.” The person receives the corresponding amount of Community Coins as a “liability” or “claim” against that Intercoin asset. They can then go pay some local merchants.

Eventually, the merchants can cash out to Intercoin and exchange it for e.g. some USD Coins, all without paying any fees (just the bid-ask spread between Intercoin and USD Coins). In fact, even this bid-ask spread can be avoided if merchants keep their money in Intercoin and the ecosystem of Local Currencies backed by it. The exchange there is completely without market makers: the exchange rate at any point is just the number of Local Coins in circulation divided by the amount of Intercoin on reserve.
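That exchange-rate rule is simple arithmetic, sketched here with made-up numbers:

```javascript
// The local exchange rate is just local coins in circulation
// divided by the Intercoin held on reserve; no market maker needed.
function exchangeRate(localCoinsInCirculation, intercoinOnReserve) {
  return localCoinsInCirculation / intercoinOnReserve;
}

// How much Intercoin a cashed-out amount of local currency is worth.
function intercoinValueOfLocal(amountLocal, circulation, reserve) {
  return amountLocal / exchangeRate(circulation, reserve);
}

// Example: 1000 local coins backed by 500 Intercoin on reserve,
// so each local coin is backed by 0.5 Intercoin.
const rate = exchangeRate(1000, 500);
```

Since every local coin is a claim on the reserve, issuing or retiring local coins moves the rate automatically.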

But even one step beyond that, the more businesses and employees accept USD Coins alongside USD, the more the whole crypto economy can proceed without the fees of cashing in and out of it. For that, the ecosystem needs more adoption by users of crypto as a medium of exchange. And that’s why we need Intercoin.

From the user’s point of view, it works pretty much like in this video:


Economic Analysis of Intercoin

Recently, Qbix launched a new spinoff project called Intercoin. As time goes on, you’ll probably hear more and more about this new decentralized currency platform and the technology that powers it. So it would be helpful to have an article that explains the economic design decisions we made along the way to arrive at its current form. This is that article.

(If you want to read the corresponding article about the technical design, stay tuned for the next one.)

What is money?

Let’s start at the beginning. Money is supposed to be used as a medium of exchange. To a person, the value of any given currency is directly related to how readily others (people, businesses) will accept it in exchange for things the person may need or want. Thus, currencies benefit from a network effect: the more people in a community accept a currency, the more valuable it is to each member. How well a currency fulfills this function determines its value to the community.

Social apps benefit from network effects too. Facebook is much more valuable if all your friends are on it, so everyone can see and be seen – photos, comments, whatever. It’s no surprise that social apps have all incorporated payments into their networks: you can now pay via GMail, Facebook Messenger, iMessages, and more. WeChat in China has all but replaced cash.

When a new payment network appears, people cash in by “depositing” value and receiving internal currency, which they then use to transact on that network. Occasionally, people cash out by “withdrawing” value from the network. This is true of casino chips, PayPal, banks, and local currencies such as BerkShares, Bristol Pounds, and Ithaca Hours. (And it will soon be true of estcoins and the petro.)

One size does not fit all

Most currencies today are run by huge communities, whether Bitcoin’s, or those of large states and unions such as the USA, the EU, Russia, or China. As such, they are one-size-fits-all.

For example, the mysterious creator(s) of Bitcoin made a decision early on that the supply would never exceed 21 million. As a result, as Bitcoin’s network effect grew, the value of each Bitcoin kept growing, too. It became a good store of value — a vehicle for investors and speculators — but that made it a lousy medium of exchange. Who wants to spend their coins on a pizza when the same coins can buy a house a couple years later?

Moreover, Bitcoin is one monolithic global network with an ever-growing ledger that records every transaction ever made. This makes it hard to scale: every block must contain a consensus about all transactions in the world made during that time. This greatly limits the number of transactions the network can handle: Bitcoin can handle 7 transactions a second across the whole world. Moreover, having one monolithic global network with a global consensus means innovation is very limited: even something as trivial as increasing the block size leads to infighting and forks.

Internet scale

The internet is not a monolithic network. It is a network of networks, each one able to determine its own policies, membership, software stack, and so on. Computers on the local network can run apps and accomplish many things among themselves without even needing to go outside the network. However, once in a while a message must be sent over the internet to another network. This is done using a standard Internet protocol such as SMTP (email), HTTP (the Web), FTP, and so on.

The main idea of Intercoin is to make a currency platform architected like the Internet. Each community can run its own payment network, allowing it to issue and manage its own currency. Payments made locally (e.g. paying for a coffee) do not need to be recorded on any global ledger. Once in a while, someone may move money across communities (e.g. paying a vendor in another country), and this is where Intercoin can help by providing

a) a standard way of atomically cashing out of one community and into another

b) liquidity to actually complete the transaction with the lowest fees

Implementation

One straightforward way to do this would be to require each community X to have an account on the global Intercoin network and keep Intercoin on reserve to back its internal economy. When someone cashes out some amount of currency X into currency Y, it can be accomplished easily:

  1. The payment network of community X receives the local currency being cashed out and takes it out of circulation
  2. Community X pays community Y the corresponding amount of Intercoin on the Intercoin network
  3. The payment network of community Y issues the corresponding amount of its local currency to the recipient.
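As a sketch, the three steps above might look like this in Python (the class and method names here are hypothetical, for illustration only – not an actual Intercoin API):

```python
# Hypothetical sketch of steps 1-3 above. All names are illustrative.

class CommunityNetwork:
    def __init__(self, name, circulating, intercoin_reserve):
        self.name = name
        self.circulating = circulating            # local currency in circulation
        self.intercoin_reserve = intercoin_reserve

    def rate(self):
        # Exchange rate if everyone cashed out at once:
        # Intercoin on reserve divided by local currency circulating.
        return self.intercoin_reserve / self.circulating

    def cash_out(self, local_amount):
        # Step 1: receive local currency and take it out of circulation.
        intercoin_amount = local_amount * self.rate()
        self.circulating -= local_amount
        self.intercoin_reserve -= intercoin_amount
        return intercoin_amount                   # Step 2: paid to community Y

    def cash_in(self, intercoin_amount):
        # Step 3: issue the corresponding amount of local currency.
        local_amount = intercoin_amount / self.rate()
        self.intercoin_reserve += intercoin_amount
        self.circulating += local_amount
        return local_amount

x = CommunityNetwork("X", circulating=1000.0, intercoin_reserve=100.0)
y = CommunityNetwork("Y", circulating=2000.0, intercoin_reserve=400.0)
sent = x.cash_out(50.0)        # 50 X-coins -> 5.0 Intercoin
received = y.cash_in(sent)     # 5.0 Intercoin -> 25.0 Y-coins
```

A nice property of this full-reserve scheme is that neither community’s exchange rate moves when money crosses over: cashing out removes local currency and reserve Intercoin in proportion, and cashing in adds them in proportion.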

There are two major ways to do step 2:

2A. One way to do step 2 is to have “trustlines” between communities, similar to how banks settle their accounts at the end of the day. This form of money is credit-based: the more communities there are, the more credit can be extended. Thus, the money supply grows over time.

In this approach, every community will have to maintain credit with every other community, or find a path to route payments through intermediate communities.

The volatility of Intercoin on the markets would probably be lower, because communities would individually determine how much credit they can extend each other.

However, fees for Intercoin transactions would be higher, as intermediate communities would take on counterparty risk when routing arbitrarily large amounts via the Interledger Protocol (ILP). Or, the throughput for Intercoin payments would be lower as communities cautiously move a little value at a time, so as not to get stuck holding the bag for another community.

2B. Another way to do step 2 is to have Intercoin be a value-based currency, like Bitcoin or XRP in the Ripple network. In this case, we can have a limited supply of Intercoin, and it can run an efficient blockchain consensus algorithm, such as XRP consensus.

In this approach, the local community networks would gladly employ their machines in validating the Intercoin blockchain, since it is in each community’s interest to maintain liquidity for Intercoin transactions. Otherwise their members would get upset — especially merchants who need to import materials from outside.

Without counterparties and trustlines, communities could hold full reserves of Intercoin and support arbitrarily large payments across communities without any of the risks associated with routing payments through intermediaries. As a result, the fees can be brought to zero, and Intercoin payments can be made seamlessly across any Community Coins. (Exchanges to external currencies such as Bitcoin or Dollars would still require market makers and fees.)

Steps 1 and 3 can be done very simply without any market makers or fees. The payment networks X and Y know exactly how much of their currency is circulating, and how much Intercoin they have on reserve (or in the first approach, the total Intercoin credit across all their trustlines). Thus they can divide one by the other to get the exact exchange rate that would apply if everyone cashed out at the same time.
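A tiny worked example of that division (the numbers are made up, purely for illustration):

```python
# Exchange rate as a simple ratio, and how issuing new local currency
# (e.g. a Basic Income) moves it predictably rather than via markets.

reserve = 1_000.0        # Intercoin the community holds on reserve
circulating = 10_000.0   # local currency units in circulation

rate = reserve / circulating          # 0.1 Intercoin per local unit

circulating += 500.0                  # issue a Basic Income of 500 units
new_rate = reserve / circulating      # dilution is exact and predictable
```

Because the rate is a deterministic function of the money supply, a community can compute in advance exactly how much any proposed issuance will dilute its currency.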

With simple math for currency exchange, instead of markets, foreign exchange fees are eliminated and the exchange rate becomes a predictable function of the community’s money supply. This allows the community to reason about the effects of its monetary policies (such as issuing a Basic Income to every resident) and even extrapolate them into the future.

To paraphrase Milton Friedman, this will make exchange rates everywhere and always a monetary phenomenon.

The Intercoin project will produce the software and resources for any community to issue and manage their own currency. However, unlike Bitcoin, Ethereum and other monolithic global networks, each community has the freedom to run any payment network it wants. Intercoin is leading the way with software and apps such as Basic Income, but over time there can be many different systems and platforms, apps and innovations.


Power to the People

Access to the global internet

With the repeal of Net Neutrality, the national conversation has once again turned to whether Internet Service Providers will begin to throttle traffic or provide preferential treatment for certain sites over others. They have already done this before, and they’ve even injected their own Javascript into webpages you visit.

Today’s US internet service landscape is run by a cartel of too-big-to-fail telcos, who don’t compete with one another in the same cities. Over 100 million users have no choice in provider. These companies are happy to sue cities in state courts to prevent competition from municipal broadband. It’s very hard to start a competitor to them today.

The people rise up

Meanwhile, in cities across the world, there are now efforts for people to build their own mesh networks, which provide high speed connections outside of the large telecommunication companies. Among European efforts are Freifunk in Germany and Guifi.net in Spain. Now, growing movements in cities like New York City and Detroit have started building their own mesh networks as well. Connection speeds within the network are far higher than what people get on the global internet through the ISPs.

If you think about it, there is no reason why signals have to travel to a Facebook server 3000 miles away just to plan a local dinner or outing with friends. Students in a class should be able to connect with one another and collaborate on documents without using Google Docs. The same is true of passengers on a plane, or shipmates on a cruise, or villagers in Nigeria. Local networks should suffice.

Software to unite communities

All this growing local infrastructure unlocks the potential for people to connect with one another on a local level without relying on the ISPs. However, until now there has been a lack of good software to run on this infrastructure. The platforms people use today are almost invariably centralized and require access to the global internet. Facebook, Twitter, Amazon, GMail, Apple, Microsoft, Instagram, WhatsApp, SnapChat, Etsy, Netflix, Uber, you name it – they get venture-funded, get a bunch of people on both sides of a market, and extract rents. They have 1 engineer per million users, or less. Customer support is non-existent. They choose what features you have and what interface you see. They have the people’s data in one place, and sometimes that makes an attractive target for governments and advertisers. In a recent article, we covered this situation in more detail.

Since Qbix released our first apps back in 2011, we have been steadily developing our Platform to power community apps. Now it turns out that this Platform is a perfect fit for the mesh networks springing up, allowing people to use apps running on local servers, and get high-speed connections to one another, without even needing to connect to the global internet. Once in a while, a message needs to be sent globally, but most of the time, what happens in Vegas stays in Vegas.

We recently paid a company called Antamedia to develop OpenWRT firmware that runs on many popular commercial routers. This firmware now allows communities to run our entire social platform and apps off of their local wifi. Imagine coming to class and having attendance taken automatically, because your phone connects to the local wifi hotspot. Imagine having messages stay private in the classroom, or unlocking rewards from actually being physically present at a concert, and so on.

As time goes on, we are continuing to push the boundaries of what’s possible in this space. Our next goal is to release software to turn an Android phone into a hotspot, allowing people to host a local community on demand wirelessly just from their Android phone. Even when they’re out camping. And the social apps built on our platform should work on all these networks, whether it’s a wifi router, an Android phone, a mesh network, or a website on the global internet.

Money to unite communities

In the last few years, all major social networks have added the ability for users to pay one another. It’s not just PayPal and Venmo anymore: you can now send money in GMail, Facebook Messenger, and now even iMessage. In China, WeChat (the chat application) has become a huge payment network within a few short years. People can now pay with WeChat at restaurants and other businesses across the country, and cash is rapidly becoming obsolete.

With the recent proliferation of interest in crypto-currencies and blockchain software, we got to thinking: why not help communities issue and manage their own money supply? This was our vision for social networking, but translated to the realm of money. Instead of having large, global, one-size-fits-all currencies (US Dollars or Bitcoin), we could once again give communities the power to determine their own fate.

So, starting in 2017, we launched a spinoff company called Intercoin Inc. to do just that. Where Qbix focuses on social networking, the Intercoin project focuses on building the infrastructure to run a secure and resilient payment network. Similar to Facebook and GMail, PayPal and Stripe, we would implement buttons that app developers and website publishers could easily add in order to get paid seamlessly in the local currency. Prices would be displayed in the currency of the user’s choice, and Intercoin would make cashing in and out of currencies seamless.

Just like Qbix Platform enables local apps where most actions work without access to the global internet, the Intercoin project enables innovations in local community fintech. Basic Income becomes achievable by any community. All citizens can see how the money is being used, spot issues and deal with them as a community. Governance can be done in a democratic manner. The money supply can be controlled by the people instead of the elites, leveraging the wisdom of the crowd.

Decentralizing the gatekeepers

There is another huge benefit of letting communities install and run open-source software. Qbix Platform and Intercoin enable a richer developer ecosystem (just like the Web has done). Developers of apps and plugins don’t have to worry about some gatekeeper kicking them out of the App Store, or revoking their API keys while they build a competing product. Each developer can market to entire communities, who can then recommend the app to one another. Communities can do the work of promoting the app to their own members, while the apps would make it easier for the communities to engage their members and give them tools to get together and feel connected.

So that’s the vision: empowering people, uniting communities. When communities need more apps, they can band together and organically raise funds to pay developers to build and maintain them. It’s an open ecosystem where collaboration leads to more and more positive feedback loops.

Communities don’t necessarily have to be local. A person can belong to several communities, including their neighborhood, city, and a poets’ guild. They can get $20 a day Basic Income from the city and another $5 from the poets’ guild. This can help decrease poverty, food insecurity, and lead to increased freedom and prosperity.


The Future of Money

Hi, this is Greg, one of the founders of Qbix. I wanted to write a post to you in first person.

Our company has been busy building apps to empower people and unite communities. The original goal was to build a social networking platform to offer an open-source alternative to Facebook and other centralized sites, letting people host their own networks. In this video (sorry about the sound) you can see my overall vision about decentralization, not just of software but of cellphone signals, power generation (solar panels), etc.

I believe that, over the next several decades, automation will lower demand for human labor. The hardest hit will be low-wage workers around the world: when 1 person can do the job of 10, wages are bound to go down in that sector. I believe this has already been happening since the 70s (Piketty, et al), and wages are going to be an increasingly ineffective way to bring enough money to the people, so our communities will need to institute an Unconditional Basic Income. The money for this income will come from the growing profits corporations will make from automating their operations. I’ve written about it extensively, in the larger context of the corporate world. I’ve written responses to libertarians and altruists. Those who wonder how we can afford UBI can look at single-payer systems in countries around the world, or right here at home. In those systems, people wind up paying less per capita for basic levels of service, because buyers don’t compete, only sellers do. A basic income would reduce inequality, remove the need for minimum wage laws, and provide better opportunities for everyone. Everyone from Milton Friedman to Martin Luther King advocated for it, but so far, it hasn’t been implemented.

What we’ve already done

So far, we have built a bunch of apps that got a lot of traction around the world. Now we are working to let communities host their own social networks, and let people use apps on their devices to authenticate with various communities (protocol), manage their identities across communities, and use social apps hosted by these communities. They can move their money between communities, spend it within a community (the businesses cash out through the community bank or crypto-network, etc.) and apply to receive UBI from various communities.

I’m currently in the process of building a team to execute on this vision. We’re also documenting our roadmap, writing a white paper, and speaking with an investment bank about doing an ICO in addition to the equity funding that we’ve been doing up to this point. Among the people who have been advising us are Ryan Fugger (original creator of Ripple), the lead developer of Solid, the creator of Ruby on Rails, the founder of PlanCast, and Warren Mosler (because MMT talks about issuing currency).

Basically, we started building social networks to decentralize the power of Facebook, etc. and we are now looking to actually build the first implementation of UBI as an app within a community currency.

Where this is all going

Our company is connecting 5 different facets (see above). We already have the first one – People – in 110 countries around the world. Now we started to build our first apps for communities to give to their members, and I should be able to invite you to some in the next month or two.

For the last 12-18 months, I’ve been working to design a crypto-currency that would support UBI. Something like groupcurrency.org, but implemented in practice. Here is the reasoning behind its design, in bullet points:

  • UBI needs to be on the local level, because the cost of living varies from place to place. In a desert environment, water is more expensive than right next to an aquifer, for instance, yet water is a necessary good. UBI doesn’t mean everyone can move to the desert environment.
  • Communities can already issue their own currencies, like the Bristol Pound. These are called “complementary currencies” – fiat of the larger polity is accepted, but inside the community, the local currencies circulate. Due to Gresham’s law, people would actually spend them much more readily than fiat. You can also see a transition to internal currencies (backed by fiat reserves) in WeChat/AliPay in China, or in how Millennials venmo each other money.
  • Communities can be more resilient – e.g. Detroit Bucks can pay an employed plumber to do a job. Even if Detroit is hemorrhaging dollars, most Detroit Bucks will stay in Detroit, so it won’t go bankrupt. Same with Greece and the EU, for example. Communities need to be able to have their own fiscal policy.
  • Money is just another “social app”, but for now it’s been a dumb one. The value of a currency comes from network effects, just like an app – stores accepting the currency in exchange for all the stuff you need, is similar to all your friends being on facebook. Payments between communities can be done using the interledger protocol.
  • UBI can be implemented in the community currency without coercive taxation by automatically measuring the local CPI (of food etc) every day, and issuing that amount to everyone in their accounts. UBI comes with “immigration quotas” – the community or its representatives make decisions as to who can move there and start to receive UBI. So the U in UBI does have one condition: “membership” in a community. This is similar to how tribes used to live for centuries, except now they don’t need to work.
  • I believe this will also have a great impact environmentally. By stimulating local economies and issuing UBI to local residents, we counteract the effects of Capitalism that cause people to live further and further away from their jobs in expensive cities like SF, and commute to work and pollute the planet. The phenomenon of commuting to work to sit in a chair is only about 150 years old, and is very wasteful for most jobs.
  • The UBI will invariably cause the community currencies to inflate, and lose value against the fiat etc. But all the prices will be quoted in the fiat, so this shouldn’t lead to confusion. The endless inflation will not cause the same problems as it did in the Weimar Republic, but instead will transparently capture the cost of “how much it costs for everyone to eat”, mitigating shortages and fixing them later. Plus, communities can implement taxation to remove money from circulation if they want to disincentivize certain business activity. Or the Federal government can help subsidize the shortages of fiat reserves that some local communities have, the same way they rescued the banks when they overextended themselves.
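The CPI-indexed issuance described in the bullets above could be sketched like this (all function names and numbers are hypothetical, purely illustrative):

```python
# Hypothetical sketch of the daily UBI rule: measure the local cost of a
# basic basket of goods, then credit that amount to every member.

def measure_local_cpi(basket):
    """Daily cost of a basic basket: sum of quantity * unit price."""
    return sum(qty * price for qty, price in basket)

def issue_daily_ubi(accounts, basket):
    daily_amount = measure_local_cpi(basket)
    for member in accounts:
        accounts[member] += daily_amount   # new money enters circulation
    return daily_amount

accounts = {"alice": 0.0, "bob": 0.0}
basket = [(3, 2.50), (1, 4.00)]            # e.g. 3 meals + 1 transit fare
amount = issue_daily_ubi(accounts, basket)  # 11.5 per member per day
```

Since the amount issued tracks local prices directly, the issuance transparently captures “how much it costs for everyone to eat” in that particular community.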

So far, our talented team at Qbix has been able to put out some impressive apps that are well-received around the world. We built an open source platform that lets communities host their own social network, and for people to manage their own identities and data privately and securely. Hopefully, in the next few months, we will expand our team and bring on board the right partners (from the fields of security, crypto-currency, and economics) who can help us to achieve our larger vision of empowering people and uniting communities.


What is Money?

The last blog post spoke about two major trends in technology in the last few decades:

  • Centralization of Platforms – leading to a sort of feudalism on the internet
  • The rise of Open Source Software – leading to a democratization of advanced technology

In 2008, a paper published by a mysterious author (or group) called Satoshi Nakamoto led to the launch – and meteoric rise – of Bitcoin, a decentralized crypto-currency whose explicit design goal was that no one company or government could control it. Bitcoin was modeled after commodity money, like gold and silver, and though new coins would appear through “mining”, the rate would slow down exponentially so that the maximum number of bitcoins that would ever be mined would be 21 million.

From 2008 to 2017, the value of Bitcoin went from $0.003 to over $4,000, an increase of 1.3 million times. With an ever expanding set of exchanges, supporting infrastructure, and merchants willing to accept it, the market cap has now steadily grown to above 100 billion US dollars.

Even earlier, a web developer in Vancouver named Ryan Fugger conceived of a different sort of payment network, which he called Ripple. The original idea was based on people extending each other interest-free credit lines, and paying people they didn’t know through intermediaries. These trustlines, as they are called, are a form of credit money rather than commodity money. Credit can expand and contract, depending on how much people are willing to extend to one another.

A Brief History of Money

Before money, people had gift economies, because a double coincidence of wants was rare. Then people started accepting gold, or silver, or other rare things, as symbols of payment. Thus, gold and silver became a medium of exchange as more and more people around the world would accept them as payment. They became the first money.

Then institutions and standards arose where people would borrow this money and promise to pay it back later. The loans could be either secured or unsecured. Either way, the borrowed money was recorded as an IOU, and these IOUs later developed into double-entry accounting systems, where each credit had an exactly matching debit. If the borrower defaulted on their debt, then both the credit and the debit disappeared. The property that was held as security for the loan would continue to be used by the original owner, even while they used the borrowed money. However, the lenders did not have the commodity money around for the duration of the loan, so they couldn’t use it.

Banks and other institutions charged with safekeeping of money realized that, most of the time, the money wasn’t being used. So they started to leverage their credit with the community and issue their own currencies, which led to Representative Money. This type of money was easier to carry to the marketplace and quickly drove the commodity money out of circulation, in a phenomenon known as Gresham’s Law. The Representative Money was additional to the already-existing commodity money, and it was now being issued by the banks. This came to be known as Fractional Reserve Banking, and without oversight many banks flooded the market with poorly backed banknotes. Some banks over-leveraged their credit so much that it caused runs on the banks, and widespread financial panics. This led to the creation of the Federal Reserve System in the United States, which represented a return to Central Banking for the country.

Today, every country has moved away from commodity money and towards a system of Fiat Currency. Fiat Currency is legal tender that is backed by the government which issued it. Legal tender means it’s able to extinguish all debts, public and private. This is usually enforced by laws, which the courts adhere to, so that courts will not compel anyone to pay a debt in any other way. If you go to a restaurant in the USA and eat without giving them anything, they can’t force you to pay by credit card. They have to accept cash.

Today’s circulating money is primarily credit money issued by banks and other financial institutions, with some of that money issued by the government. In the US, official national money is minted as specie (coins) or printed as paper dollars by the Federal Government through the Treasury, a power enumerated in the country’s Constitution. However, the vast majority of money in circulation is not this M0 money. Instead, it’s M1 and M2, including all those credit card balances, all those PayPal transactions, and the trillions moving around the world as nothing but bits in a computer. Every so often, banks settle liabilities between their accounts using systems like ACH, which is run by the Federal Reserve System.

Fiat currencies and Fractional Reserve Banking have allowed the money supply to grow and shrink to accommodate the needs of people, businesses and industries. But, ultimately, monetary policy is in the hands of governments, which sometimes leads to inflation, hyperinflation and various other issues. In general, both inflation and a credit crunch are runaway effects, which have the potential to disrupt the economy for a long time. This is a great overview of business cycles under the current system. (And here are two rap songs.)

Decentralized Ledgers

What’s appealing about Bitcoin and other decentralized currencies is that people are able to transact, store value, etc. without anyone being able to stop them. You can think of it as a decentralized Paypal. The bitcoin ecosystem has grown tremendously, and its market cap now exceeds $100 billion. The guarantees of safety and security come from the cryptographic systems that underpin their operation. The lack of control, however, is not so easy: the system depends on a property called Byzantine Fault Tolerance to achieve global consensus in the face of many (possibly up to 50%) dishonest participants at any given time.

A question might arise, why does Bitcoin need global consensus, if it’s decentralized? After all, if I pay you, we can just both cryptographically sign it so that everyone who sees the transaction knows we approved it. The reason is that digital currencies which try to implement commodity (value) money suffer from the double-spend problem. Any actor can pay two people with the same coin, and “neglect to mention” that they no longer have the coin.

The answer, in technological terms, became the Blockchain. It’s basically an ever-growing ledger of transactions, that is periodically signed by some validators and replicated across all the machines in the network. The idea of a chain of cryptographically signed blocks, each one referring to the previous one, is not new or unique. It’s a special case of a Merkle Tree, and it’s used in decentralized systems with no global consensus necessary, such as the Secure Scuttlebutt protocol.
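The chain-of-blocks idea is small enough to show directly – each block commits to the hash of its predecessor, so tampering with history breaks every later link (a simplified sketch, not Bitcoin’s actual block format):

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical (sorted-key) serialization.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, transactions):
    return {"prev": prev_hash, "txs": transactions}

# Build a tiny chain: each block refers to the previous block's hash.
genesis = make_block("0" * 64, ["coinbase -> alice"])
block1 = make_block(block_hash(genesis), ["alice -> bob: 5"])

intact = block1["prev"] == block_hash(genesis)        # the link holds
genesis["txs"] = ["coinbase -> mallory"]              # tamper with history...
still_intact = block1["prev"] == block_hash(genesis)  # ...and the link breaks
```

This is why rewriting an old block requires rewriting every block after it, which is exactly the work the network’s validators make prohibitively expensive.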

Bitcoin used Proof of Work both as a way to sign the blocks in the blockchain, and as a way to “randomly” select the next validator. The idea is that each transaction will eventually be approved, if not by one validator then by another. The problem with Proof of Work has been the escalating arms race that has wasted tons of energy, to the point where validating each transaction takes as much electricity as running an entire household for a day. It’s been called the world’s worst database. And now, in practice, control of the validation has been concentrated in the hands of a few mining pools in Asia, in areas where electricity is cheaper. Bitcoin’s promise of decentralization may have dissipated, leaving room for more innovation.

A global consensus is only necessary to solve the double-spend problem. With credit money and trust-lines, this problem simply doesn’t exist. You are extended credit by those who trust you up to a certain amount (your friends, VISA, etc.) and your balance yo-yos back and forth, but there is no danger of “spending the same coin” because every trustline is separate from every other one.
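A trustline can be sketched as a single balance bounded by the credit each side extends (a simplified illustration of the credit-money idea, not Ripple’s actual data model):

```python
# Hypothetical trustline: two parties, each extending a credit limit to
# the other. The balance moves within those limits; since each trustline
# is independent of every other, there is no coin to double-spend.

class Trustline:
    def __init__(self, a, b, limit_a, limit_b):
        self.a, self.b = a, b
        # limit_a: credit a extends to b (max b can owe a), and vice versa.
        self.limit = {a: limit_a, b: limit_b}
        self.balance = 0.0                 # positive means b owes a

    def pay(self, sender, amount):
        delta = amount if sender == self.b else -amount
        new_balance = self.balance + delta
        limit = self.limit[self.a] if new_balance > 0 else self.limit[self.b]
        if abs(new_balance) > limit:
            raise ValueError("payment exceeds extended credit")
        self.balance = new_balance

line = Trustline("alice", "bob", limit_a=100.0, limit_b=50.0)
line.pay("bob", 30.0)     # bob now owes alice 30
line.pay("alice", 70.0)   # alice repays 30 and runs up 40 of her own debt
```

Notice that no global agreement is needed: only the two parties on the line need to agree on its balance, which is why credit money sidesteps the double-spend problem entirely.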

That was the original idea behind Ripple. But the project eventually found it hard to get adoption, because people can’t be on the hook for large amounts of money for their friends. That’s historically been a job for banks, payday lenders and other financial intermediaries. For small amounts, however, trustlines and sidechains are a major area of research in the Lightning Network and other projects. The Interledger Protocol allows payments between ledgers, trustlines, or anything else, making it possibly the glue between all the different emerging technologies.

Ripple was taken over by the NewCoin project, which reimagined global consensus without proof-of-work. They got funding from Andreessen Horowitz, a forward-thinking VC firm that also made investments in Keybase and other crypto companies. They currently work with banks to replace ACH transactions (moving from legacy SFTP systems to their XRP token) and already move more money than the Bitcoin network.

Money is an App

If you think about it, the value of a currency comes from network effects, just like any social app. You care about currency X if merchants accept X in exchange for the various things you need and want. Similarly, you care about app Y if your friends are on Y and you can do whatever you need/want with them through Y.

When viewed in this manner, you can see why community currencies would be the perfect fit for the Qbix Platform. It would be like Bristol Pounds, but much smarter. It could implement Unconditional Basic Income as a feature. It could allow communities to pitch in to finance drug research and production, or develop open course materials for students.

If you look at the past few years, you see the meteoric rise of cryptocurrencies. But also, look at the growth of payments through WeChat and AliPay in China, or Venmo here in the USA. These are essentially community currencies backed by fiat reserves. We can use decentralized ledgers powered by e.g. Ripple’s technology to let any community run their own currency. More on that next time.


The Future of Decentralization


Centralization and Open Source

Act 1: Paradise Lost

The internet, from its very inception, was conceived as a decentralized network with no single point of failure. Early apps, such as Email, IRC and FTP, were built around open, decentralized protocols where anyone could build a server or a client.

But then, centralized services started to emerge. Driven by economies of scale, the venture capitalist model involved funding these companies until they would capture a new market, sometimes establishing a monopoly and extracting rents. America Online was an early example, but was eventually disrupted by the decentralized Web. Today, Facebook and Google are examples of giant social platforms where people prefer to post their data instead of on their own website. They use (and sometimes abuse) vast troves of personal information that people volunteer to them, or that they surreptitiously collect. Even hardcore capitalists might blush at the effects of such centralized monopolies: Peter Thiel, who famously invested in Facebook, insists competition is for losers, but not everyone feels that way.

The game changed even more when broadband connections arrived. Now, people’s internet was “always-on”, and networked apps started assuming you could either be “online” or “offline”. If you were online, it meant their server was reachable on the global internet, and thus all the signals could go through it. Never mind that you might be on a slower connection in India, or on a cellphone in rural Africa. Facebook will build infrastructure for you, so long as all your signals travel through their server farms and your data is stored in California – an offer India ultimately rejected. No matter, Facebook developed drones, and Google developed balloons, to bring access to the global internet to Africa. Why *global* internet access? Because the signals will travel through Facebook and Google servers. Their business models are good enough that they can pay for the infrastructure.

The slew of services launched in the last two decades, since broadband became a thing, have been centralized. Facebook, Twitter, Amazon, GMail, Apple, Microsoft, Instagram, WhatsApp, SnapChat, Etsy, Uber, you name it – they get venture-funded, get a bunch of people on both sides of a market, and extract rents. They have 1 engineer per million users, or less. Customer support is non-existent. They choose what features you have and what interface you see. They have the people’s data in one place, and sometimes that makes an attractive target for governments and advertisers.

Act 2: Open Source

In the last few decades, another movement has been growing in software – open source. Modeled on how science progressed, it allowed people to build on each other’s work, and collaborate on an ever-growing snowball. The original free software movement was fueled by the Free Software Foundation and its licenses (GPL, GPLv2, etc.) which made an ingenious use of copyright to implement “copyleft”. But over time, it came to encompass less restrictive licenses that let people do anything they wanted with the source code (MIT, BSD, Apache, etc.)

The resulting projects – from operating systems (Linux, BSD), to languages (PHP, Python), to databases (MySQL, Postgres), to web servers (Apache, Nginx), to web browsers (Mozilla, WebKit) – have all created tons of value, far beyond their counterparts in corporate silos (Windows, Internet Information Server, Internet Explorer). They have lowered the barrier for anyone to contribute to the growing snowball, and as a result, the products have become so stable and resilient that everyone has moved to them. (Safari and Chrome are based on WebKit. MacOS is based on BSD.) Economically, supporting open source projects (and defending them against software patents) started to make sense for large corporations.

The Web itself – arguably the most widespread and successful application using the internet, and the world’s largest development platform – has spawned so much wealth creation partly because it has always been radically open source. Every browser since the original Mosaic has had a “View Source” command. Developers could learn HTML, CSS and JavaScript by looking at other websites, and downloading free software libraries. Thus the ecosystem took off, and has led to trillions of dollars in value around the world.

Because of the open nature of these ecosystems, anyone can take the full power of Linux, or the Web browser, and build something on top of it. As a result, Linux has been adapted to run on everything from supercomputers to toasters, while Windows remains tied to the x86 architecture. If the open source model were applied to drugs, instead of the current patent restrictions, we might have seen far more innovation, and cures for the long tail of diseases around the world. Clay Shirky has given a great TED talk on the trade-offs between Institutions and Collaboration.

Thus we see two major trends develop over the last few decades:

  • Increasing centralization
  • The growth of open source software

Today we live in a world where most people have moved away from general-purpose computers and open protocols to use ever-more locked-down devices and ecosystems controlled by a single company. But it’s also a world where more and more of the software that’s developed is released as Open Source. Even that famously anti-open-source company, Microsoft, has now “left the dark side” and open sourced its flagship development environment, Visual Studio Code. Yet most of that open source today is currently hosted on GitHub, a centralized site (and the successor to SourceForge) that’s home to myriad software projects.

Why do so many open source projects choose to have their home at github.com/myproject? Why do so many brands tell people to go to facebook.com/mybrand, or twitter.com/mybrand?

Technology – or lack thereof – is the main reason things become more centralized. This also explains a lot of the debate between Anarchists and Statists. Economies of scale attract more resources, until a new disruptive technology is able to decentralize the innovations for everyone to access. This is the goal of companies like OpenAI, which believe that datasets and innovations in artificial intelligence should be widely available to everyone, and not just concentrated in the hands of a select few.

Today, there are movements to decentralize social networking, and to re-decentralize the web. But things are just beginning, and these movements are only now starting to get the same kind of attention that Bitcoin and crypto-currencies got when they decentralized money. Having discussed the past, the next blog post will discuss the future of decentralization. Stay tuned!


Properly Valuing Contributions

A major part of our mission as a company is to empower people. So naturally, at Qbix, we give a lot of thought to the best ways of doing that. This post describes a compensation model that we have designed, which clearly rewards people for their contributions, and actually drives everyone to compete in how much they can contribute to a product’s bottom line.

We call it the Qbix Compensation Model. Feel free to use it at your own company, and share it with others.

What Usually Happens

Say you’ve got an app or website that’s generating some revenues, and you’d like to grow them. As the owner and biggest stakeholder, you have to

  1. keep obsessing and coming up with a list of ideas to try
  2. raise a budget to pay your employees (developers, designers) to build it
  3. set up a system to measure the impact of these new features
  4. launch the features and A/B test their impact
  5. finally, pay everyone on time and hope the features justified the expense

When you have a project, especially an open source project, major contributions can come from anywhere. Wouldn’t it be nice if people with the right skills could come and help you grow your revenues? Wouldn’t it be great if they’d be excited to keep doing it? They just might, if you could compensate them for it. Behold:

Qbix Compensation Model

So you’ve got a product that’s already generating $X / week in profit after expenses. It’s pretty good, but you’ve got a lot of room for improvement. A developer comes along and offers to build a feature that could potentially double or triple your revenue. Maybe they can improve your user engagement, retention, or viral coefficient. Maybe they can design some cute digital goods or useful features that can be purchased in the app. If your project is open source and you actually publish your metrics online, the right person might just come along to help you improve them.

The key idea behind the QCM is that your product’s revenue is the bottom line that you obtain from multiplying all the little factors that go into it. Double day-7 user retention from 20% to 40%, for instance, and you may just triple your revenue. Introduce a popular in-app purchase, and suddenly your revenue may jump by a factor of 10. Each little contribution grows the pie.

So when that developer comes along with an idea and pitches it to you, you can make them a good offer: set them up with a copy of your source code (protect yourself with NDAs, provisional patents and whatnot) and let them do all the work of building, testing, and contributing this new feature. Then you carve out a random sample of your audience to beta test how this change will affect your metrics.

Once the new feature launches, find the average revenue per user for the two weeks before the launch. (Here at Qbix, we prefer to calculate as much as we can on a weekly basis, because months vary in size.) Then for weeks 3 and 4 after the launch, calculate the average revenue for the users who experienced the new feature. Did it increase?

Extrapolating to your total number of users, you can now predict that you would make $Y if you rolled out this feature to everyone. After all, if your sample was large enough and truly representative of the overall user base, the changes you see should hold across the board. Feel free to gradually increase the sample size and measure the above again.
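The extrapolation step above is just a per-user average scaled to the full audience. Here is a minimal sketch (the function name and the numbers are illustrative, not from Qbix’s actual tooling):

```python
def project_weekly_revenue(sample_revenue, sample_users, total_users):
    """Project weekly revenue across the full user base from a
    representative beta sample, assuming per-user behavior holds."""
    avg_revenue_per_user = sample_revenue / sample_users
    return avg_revenue_per_user * total_users

# A hypothetical 10,000-user beta generating $250/week, scaled to 1,000,000 users:
projected = project_weekly_revenue(250, 10_000, 1_000_000)  # about $25,000/week
```

The assumption doing all the work is representativeness: if the beta cohort skews toward power users, the projection will overshoot.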

The Formula

So say your project was bringing $X / week in revenue before you accepted the contribution, and now thanks to this change it’s generating $Y / week. The number C = $Y / $X is the factor by which your revenues grew. It is what determines the value of the contribution to your project going forward. This improvement will probably compound with other improvements people make down the line, so it will constantly be contributing to your bottom line (until, that is, something eventually comes along to disrupt your business model).

So now, how much should you compensate the contributor? Well, pick the maximum fraction by which you’re willing to dilute your app’s revenue, for example 10%. You would offer the following to the contributor:

$Z = ($X) × (10%) × (C − 1)
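As a sketch, the payout formula can be written as a small function (the name and the default dilution are illustrative, matching the 10% used in this post):

```python
def contribution_payout(x_before, y_after, dilution=0.10):
    """Weekly payout Z = X * dilution * (C - 1), where C = Y / X
    is the factor by which weekly revenue grew after the contribution."""
    c = y_after / x_before
    return x_before * dilution * (c - 1)

# With $1000/week before and $3000/week after, at 10% dilution:
payout = contribution_payout(1000, 3000)  # $200/week
```

Note that if revenue doesn’t grow (C = 1), the payout is zero, so the contributor bears the risk of an ineffective feature.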

Real Examples

Suppose you were making $1000 a week from your project. You accept a contribution, beta test it, etc. and now, four weeks later you’re making…

  • $2000 a week. So the contributor gets $100 / week
  • $3000 a week. So the contributor gets $200 / week
  • $100,000 a week. So the contributor gets $9,900 / week.

As you can see, the more you make, the closer the contributor’s revenue gets to 10% of yours.
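To see why the contributor’s share approaches (but never reaches) the 10% cap, note that Z / Y = 10% × (C − 1) / C, since Y = X × C. A quick illustrative check (function name is ours, not from the post):

```python
def payout_share(c, dilution=0.10):
    """Contributor's payout as a fraction of the new revenue Y:
    Z / Y = dilution * (C - 1) / C, which tends toward `dilution` as C grows."""
    return dilution * (c - 1) / c

# Matching the examples above: C = 2 gives 5%, C = 3 gives ~6.7%, C = 100 gives 9.9%.
```

The share is increasing in C, so the bigger the win, the closer the contributor gets to the full 10% without ever exceeding it.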

Ownership and Risks

Some investors, upon seeing this model, might be a little bit uneasy. After all, did you just effectively “give away 10% of your company” to this contributor? No.

First of all, this kind of partnership should be done per project, not per company. If the feature is re-used in other projects, the same arrangement can apply.

Secondly, if other people contribute more features that increase the pie, then they get a part of the pie. If you (or your employees) wind up contributing more down the line, then you “claw back” your share of future revenues from 90% back to something closer to 100%. The cool thing is, all of you are always motivated to compete and see who can grow the pie more. As long as the pie grows (that’s what the beta testing is for!) no one’s revenues go down.

Finally, you can also limit the period that the contributor gets paid to, say, 5 years. It’s still a pretty attractive option for a developer. Imagine, as a developer or designer, coming across a SaaS company whose product you are very familiar with, that openly publishes its metrics. They honestly reveal that some metrics could use improvement, and you have a great idea for how to do it. You get in touch and – instead of hiring you for some full-time position doing mostly drudge work inside the company – they agree to pay you for your contribution, if it works out: $1,000,000 / year × (C − 1), where C is the factor by which you improve a certain KPI metric.

So you sign an NDA. Maybe they file a provisional patent on your idea, so your uncle doesn’t sell it to the next company. They set you up with a copy of their source code, and give you access to their testing environment. You build the feature, document your work, they run the tests and see you improved it by 10%. You make $100,000 / year passively. They’re happy, and you’re happy. You got a job done, got fairly compensated, and can go back to doing what you want in your life.

People live lives. Companies build products.

The above is our motto. We believe that products can grow better and faster if contributions are allowed to come from anywhere. A person with a great idea or ability should be able to contribute to a project, get compensated in a clear way, and move on with their life.

It used to be that people worked at the same company for decades. Today, technology has come a long way and changed the face of labor. As companies develop more automation and more AI, such project-based work will increasingly become the way things get done. This is the future of work.

Starting a New Project?

Are you just starting out? Use this model to fairly compensate people for their contributions, instead of always thinking in terms of “bringing them on board” permanently. Get things done. Get the designs completed. Get the minimum viable product ready. Go raise venture capital to grow the business – or not. After all, when you can attract people from all over the world to contribute remotely to your project, you may not need all that much capital up front.

But wait, what is $X? It can’t be zero, because then you can’t divide by it. (The very first founder would then have brought “infinite” value to the company.)

Instead, just start a “product line” inside your company, and say that it’s making $1 / week. You can after all purchase your own crappy product, which does nothing. Now, pick your dilution – say 10%, i.e. 10 cents per unit of C − 1 – and offer everyone compensation from the QCM calculations in exchange for contributing to your growing snowball. If you sit back and do nothing except attract people to your project, your company will still get 90% of the revenue from that product line. That revenue might be $1 million / week, and the contributors each did their part. They’re happy, and you’re happy. Things get done. And society moves forward.


Four Million Downloads!

Happy New Year, everybody. Coinciding with the start of 2016, we hit another milestone: our four millionth download! Both Calendar and Groups enjoy nearly a 5 star rating in the app stores. To celebrate, we put together an interactive visualization of all our users around the world, complete with sample reviews in their native languages. Feel free to rotate the globe below, tap on some countries and explore the reviews. The animated pings you see represent people actively using our apps around the world.

[Interactive globe visualization: Qbix app reviews by country]

Of course, we are pretty proud of reaching this milestone. But our biggest announcements are still ahead this year. There is a lot of work to do on our road to empowering people and uniting communities. Those of you who’ve been following us know that 2016 is going to be the breakout year for Qbix.
