Modern Security Practices for Web Developers

When you’re building a web application, your servers become responsible for managing other people’s data. It therefore falls on you, the app developer, to implement security measures that prevent data breaches. This is especially true if your application gets popular: the more users you have, the more valuable the data, and the more costly a breach becomes.

Just recently, we had the Equifax breach affecting 143 million Americans, the Alteryx breach of Experian data on 235 million consumers, the Yahoo breach compromising 3 billion accounts, and so on. Breaches like these have become a routine occurrence, and that is what motivated this article.

Web Security in 2018

In 2018, there are plenty of good guides and resources on web security. This guide is different. It describes new techniques that have become possible on the Web, which eliminate large swaths of attacks. It lists principles that you can apply in your own apps to make them progressively more secure.

As developers of a large, open-source platform for running social networks and payment systems, we have had to confront security issues head-on. We know that popular open-source projects can be especially vulnerable, since anyone can see and analyze the back-end code. This motivated us even more to follow Kerckhoffs’s principle: the security of a system should rest on its design, and not its obscurity. So let’s dive in.

The Principles

  1. Do not store sensitive data unencrypted at rest
  2. Do not store keys to encrypted data in the same place
  3. Require cryptographic signatures for assertions
  4. More proof is better than less proof
  5. Use blockchains for data consistency


Do not store sensitive data unencrypted at rest

The easiest way to prevent sensitive data from being stolen from the server is to not store it there. Often, it can be stored on the user’s client instead. At the very least, this limits a data breach to only the compromised clients. These days, the makers of operating systems (especially mobile operating systems) encrypt user data by default, and the user unlocks their phone or laptop using a biometric ID or passcode. Thus, to get the data, an attacker would have to gain access to the user’s unlocked device.

Of course, sometimes you need this data on the server side. People may want to access and use their own information when logged in. External APIs may require an OAuth token, or even a username and password. If you’re going to store sensitive data – whether on the client or the server – consider encrypting it.

It used to be that data stored on the client could not be reliably encrypted. But in the last few years, the Web Crypto API has been finalized and is now supported by all major browsers, including iOS Safari and Android Chrome. It gives you access to the cryptographic tools you need to secure data on the client, including random number generation and the derivation and handling of private keys, and the browser can now ensure that keys are never exported out of the client-side website context.
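
For instance, here is a minimal sketch of generating a non-extractable key pair and signing some data with it; the choice of ECDSA over P-256 is just one reasonable option, not a recommendation from this article:

    // A sketch (browser Javascript): generate a non-extractable signing key pair.
    // Passing extractable = false asks the browser to keep the private key
    // material out of reach of page scripts.
    async function makeSigningKeys() {
      const keyPair = await crypto.subtle.generateKey(
        { name: 'ECDSA', namedCurve: 'P-256' },
        false,              // not extractable: the key never leaves the browser
        ['sign', 'verify']
      );
      // Sign some data; the page only ever holds an opaque CryptoKey handle.
      const data = new TextEncoder().encode('hello');
      const signature = await crypto.subtle.sign(
        { name: 'ECDSA', hash: { name: 'SHA-256' } },
        keyPair.privateKey,
        data
      );
      return { keyPair, signature };
    }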


Do not store keys to encrypted data in the same place

For really sensitive data on the server, encrypt it when storing it in the database. Many database vendors actually have transparent solutions for this. To decrypt the data, you will need:

  1. The app’s private key
  2. The private key of the user who encrypted the data

(See the “more proof is better than less” principle, below.)

Neither of these keys should be stored in the same place as the database, so a hacker would have to compromise multiple places to get the decrypted data. You can even split the keys up and store the parts in different places, as in the sketch below. In any case, once you obtain the keys, do not save or export them anywhere; keep them only in transient operating memory.
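
To illustrate, here is one simple way to split a key into two shares that can be stored separately (XOR splitting; the variable names are made up):

    // XOR key-splitting: both shares are required to reconstruct the key,
    // so they can be stored in two different places.
    const key = crypto.getRandomValues(new Uint8Array(32));    // the real key
    const shareA = crypto.getRandomValues(new Uint8Array(32)); // random share
    const shareB = key.map((b, i) => b ^ shareA[i]);           // key XOR shareA
    // Recombine only in transient memory, never saving the result anywhere:
    const recombined = shareA.map((b, i) => b ^ shareB[i]);    // equals key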

The private key of the user who encrypted the data can itself be encrypted, and unlocked using a valid user client device. For each device, another copy of the key would have to be stored, encrypted with that device’s private key. Each device is identified to the domain by its corresponding public key.

It’s a little-known fact that most modern browsers allow you to save Web Crypto keys on the client using the IndexedDB API. However, once again, you don’t want to store these keys without encrypting them first. Otherwise, anyone with access to the browser’s database will be able to see and steal the keys, allowing them to take actions on the user’s behalf.
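
For example, a sketch of persisting a CryptoKey in IndexedDB (the database and store names here are made up); since the key can be non-extractable, even code that reads it back gets only an opaque handle:

    // Persist a CryptoKey under the name 'masterKey' in IndexedDB.
    // CryptoKey objects can be stored directly via structured clone.
    function saveKey(key) {
      return new Promise((resolve, reject) => {
        const open = indexedDB.open('MyAppKeys', 1);
        open.onupgradeneeded = () => open.result.createObjectStore('keys');
        open.onsuccess = () => {
          const tx = open.result.transaction('keys', 'readwrite');
          tx.objectStore('keys').put(key, 'masterKey');
          tx.oncomplete = () => resolve();
          tx.onerror = () => reject(tx.error);
        };
        open.onerror = () => reject(open.error);
      });
    }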

The typical approach is to have a master key which is used to encrypt various things on behalf of the owner. This master key is then stored encrypted by one or more access keys, which are derived from a user’s passcode or biometrics (such as their fingerprint or Face ID). A user can have one access key for each of their fingers, for instance. This way, if the user adds another password or finger, they don’t have to re-encrypt everything with the new key.

The access keys should not be stored on the device! They are derived every time from the user’s password or biometrics, and used to decrypt the master key each time it’s needed. The Web Crypto API can prevent these keys from being exported via Javascript, and if your code is running inside a trusted browser or mobile authentication session, other apps can’t get at them.
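
Here is a sketch of what that flow might look like, assuming PBKDF2 for deriving the access key and AES-GCM for wrapping the master key; the iteration count and other parameters are illustrative, not recommendations:

    // Derive an access key from the passcode and use it to unwrap the master
    // key. The salt and iv are stored alongside the wrapped key; the derived
    // access key itself is never saved anywhere.
    async function unlockMasterKey(passcode, salt, wrappedMasterKey, iv) {
      const baseKey = await crypto.subtle.importKey(
        'raw', new TextEncoder().encode(passcode), 'PBKDF2', false, ['deriveKey']
      );
      const accessKey = await crypto.subtle.deriveKey(
        { name: 'PBKDF2', salt, iterations: 100000, hash: 'SHA-256' },
        baseKey,
        { name: 'AES-GCM', length: 256 },
        false,                        // access key is not extractable or stored
        ['unwrapKey']
      );
      return crypto.subtle.unwrapKey(
        'raw', wrappedMasterKey, accessKey,
        { name: 'AES-GCM', iv },
        { name: 'AES-GCM', length: 256 },
        false, ['encrypt', 'decrypt']
      );
    }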

If other Javascript is running in the website, it might misuse the keys, so it’s best to expose only a few functions, like MyApp.sign(data). Load the script defining these functions via the first script tag in the HTML document, and use Object.freeze() (from ECMAScript 5, which is supported in all modern browsers) to prevent other Javascript code from replacing the functions.
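
A minimal sketch of that pattern, using the hypothetical MyApp.sign(data) from above:

    // Expose only a narrow signing API, then freeze it so later scripts
    // can't replace sign() with a malicious version.
    (async function () {
      // Generated (or loaded from IndexedDB) during initialization and kept
      // in this closure; never attached to any global object.
      const { privateKey } = await crypto.subtle.generateKey(
        { name: 'ECDSA', namedCurve: 'P-256' }, false, ['sign', 'verify']
      );
      window.MyApp = {
        sign(data) {
          return crypto.subtle.sign(
            { name: 'ECDSA', hash: { name: 'SHA-256' } },
            privateKey,
            new TextEncoder().encode(data)
          );
        }
      };
      Object.freeze(window.MyApp); // later scripts can't swap out sign()
    })();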

Ideally, the user’s passcode and biometrics should be entered in a trusted sandbox that other code running on the computer (such as other Javascript, or screencasting software) can’t access. Sadly, there is still no standard way to isolate such a sandbox on the Web. Fingerprints aren’t going to be intercepted by keyloggers or screencasting, but passwords can be. In the future, such a mode may be developed by operating system vendors, and users would recognize it because it would display some photo or phrase that only the user would know (because they entered it when setting up their device).

This is the weakest link on the Web (and in most operating systems) in 2018: anyone can spoof a password prompt on iOS, MacOS, Windows, and the Web. Biometrics are better. For now, you can simulate a trusted prompt with an iframe running on a domain of the user’s choice, one the user trusts: when the user puts their keyboard focus in the password area, they would see their secret phrase in the iframe. (There used to be a standard called xAuth that would have allowed the enclosing site to find out this domain, but it has fallen by the wayside.)

Long story short, it looks like this:

  1. user inputs passcode or biometrics (in a trusted sandbox)
  2. derive access keys from that (in the same trusted sandbox)
  3. use them to decrypt the master key for that domain, stored by the user agent on the user’s device
  4. use the master key to decrypt information sent to the user, and sign information sent by the user


Require cryptographic signatures for assertions

This is all well and good, but what if the server doesn’t send the right Javascript? On the Web, we constantly have to trust the server to send the right resources. Content Delivery Networks that serve the same files to thousands or millions of sites are juicy targets: hackers who manage to modify those files can compromise many sites at once.

It would be nice if there were a cryptographic hash of the file that we could obtain through a side channel, for example from someone (like ourselves) who downloaded the resource from that URL and reviewed it. Luckily, the Web now allows any site to easily add Subresource Integrity checks to do just that.
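
Using it is a one-attribute change to your markup; the digest below is a placeholder, not the hash of any real file:

    <!-- The browser refuses to execute the script if its SHA-384 digest
         doesn't match the integrity attribute. -->
    <script src="https://cdn.example.com/library.js"
            integrity="sha384-oqVuAfXRKap7fdgcCY5uykM6+R9GqQ8K/uxy9rx7HNQlGYl1kPzQho1wx4JwY8wC"
            crossorigin="anonymous"></script>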

However, the top-level resource that you request via a URL can still be changed, even if you load it via https. Perhaps the server was compromised and hacked. Perhaps a certificate authority was compromised, and someone used DNS hijacking to serve you a fake site. Having browsers verify top-level resources against a hash would be really useful, but the current breed of browsers doesn’t support it. This is the second-weakest link on the Web. Perhaps one day it will be fixed with IPFS and content-addressable URLs. For now, at Qbix we plan to release a browser extension that parses the top-level location URL for an appended hash via special characters (##hmac=...) and, if present, verifies the document against it or rejects loading it.

Responding to a request with a file is an example of an assertion: “this is the resource at this URL.” The hash acts as a verifiable fingerprint of this assertion. But there are many other assertions you can sign, and the general principle is to require signatures for assertions.

If the entity checking the signatures of the assertion is the same entity issuing the assertion, then it can just keep a secret key around. For example, it’s useful to sign the session ids that your servers generate (compute an HMAC, and include that signature as part of the session id). This way, your network can reject bogus session ids (such as “blabla”) right away, without hitting the database or doing any I/O at all. Even the computers acting as gateways into your network can keep the secret key and reject all requests that don’t contain a properly signed session id.
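
A sketch of such signed session ids in Node.js (the secret’s location and the id layout are assumptions, not a standard):

    // Issue and verify HMAC-signed session ids. The secret lives only on
    // your servers; bogus ids are rejected without any database access.
    const crypto = require('crypto');
    const SECRET = process.env.SESSION_SECRET; // assumed to be configured

    function issueSessionId() {
      const id = crypto.randomBytes(16).toString('hex');
      const sig = crypto.createHmac('sha256', SECRET).update(id).digest('hex');
      return `${id}.${sig}`;
    }

    function verifySessionId(sessionId) {
      const [id, sig] = sessionId.split('.');
      if (!id || !sig) return false;
      const expected = crypto.createHmac('sha256', SECRET).update(id).digest('hex');
      return sig.length === expected.length &&
        crypto.timingSafeEqual(Buffer.from(sig), Buffer.from(expected)); // constant-time
    }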

This enables another aspect of security: uptime. Your network can handle many more requests if unauthorized requests are stopped early on without putting a strain on expensive resources. Users without a session may be able to request static assets from a CDN, but your app servers won’t be weighed down. And within authorized sessions, you can implement quotas for using resources, throttling users and preventing them from abusing the network, or even charging them for their usage. All of this is possible simply from requiring signatures for assertions.

If the entity checking the assertion is not necessarily the same one that issued it, then you can use public key cryptography: the assertions should be signed by the issuer’s private key, and anyone with the public key can verify them. For more efficiency, or for off-the-record messaging, you may want to use a hybrid cryptosystem, where you bootstrap with asymmetric keys but generate symmetric keys that can be shared per-session or per-message.


More proof is better than less proof

It’s a simple principle: you don’t become less secure by requiring more proof (of identity, permissions, etc.) before granting a request.

Since the early days of the web, cookies have been used to transmit session ids. Until the advent of localStorage, that’s as far as people could really go. If you sent a request with the correct session id, your request was executed in the context of the user who authenticated with that session. Thus, lots of attacks were developed, including session hijacking and fixation attacks, in which the attacker gains access to the user’s session and can impersonate them. The cookie became a sort of “bearer token”: anyone who presented it could access the protected resources, even if they had stolen it. In the last few years, companies started a big push to get all website traffic encrypted over https. This is a laudable goal for many reasons, but as a way to secure cookies, it is not enough.

With localStorage, and now with Web Crypto, you can do better. The server can require additional information to be sent along with each request. We’ve been talking about signing requests with private keys, so now it’s time to put that to use. Each device would have a master key per domain, per user. Each request would be signed (asymmetric cryptography, as in signature = MyApp.sign(data)) with this master private key before being sent to the server. The session id would still identify the session on the server, but now the master public key would be sent along in the request, to verify that a known and authorized device was, in fact, used to generate it.
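
A sketch of what such a signed request might look like, reusing the hypothetical MyApp.sign() from earlier; the header names are illustrative, not a standard:

    // Sign the request body with the device's master key and send the
    // signature (and public key) alongside the session id.
    async function signedFetch(url, body, sessionId, publicKeyJwk) {
      const signature = await MyApp.sign(body); // ArrayBuffer from Web Crypto
      return fetch(url, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'X-Session-Id': sessionId,
          'X-Public-Key': JSON.stringify(publicKeyJwk),
          'X-Signature': btoa(String.fromCharCode(...new Uint8Array(signature)))
        },
        body
      });
    }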

When new devices need to be provisioned to execute (some or all types of) requests on behalf of a user, the general approach is to use existing devices to sign the provisioning authorization. Policies could be developed (and checked on the server) for how many devices are needed to provision a new one (typically, one device is enough). This signed authorization can easily be communicated via QR codes (camera), Bluetooth, sound, email, or any other side channel. Ideally, the communication should be secured as well, by encrypting it with the new device’s public key, so only the new device can use it.
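
A sketch of what the signing side of provisioning could look like, again using the hypothetical MyApp.sign() and a made-up payload format:

    // An existing, already-trusted device signs the new device's public key.
    // The result can be delivered by QR code, Bluetooth, or another channel.
    async function authorizeNewDevice(newDevicePublicKeyJwk) {
      const payload = JSON.stringify({
        action: 'provision',
        newKey: newDevicePublicKeyJwk,
        issuedAt: Date.now()
      });
      const signature = await MyApp.sign(payload); // existing device's key
      return { payload, signature };
    }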

Finally, in keeping with the principle that more proof is better than less, users should also be able to turn on two-factor authentication. Not only can this alert them when a new login is being attempted, but it also requires an attacker to present more than one factor. Factors come in three types:

A) something you have (e.g. your phone)
B) something you know (e.g. password)
C) something you are (e.g. biometrics)

Typically, A is combined with either B or C, and usually this is enough. Provided you lock your phone when you walk away from it, and the OS encrypts all your data and requires a passcode (with rate-limiting) to access it, A alone can be enough. You may relax the extra requirements for personal devices.

But in environments like logging in to a website on a public computer, you might want to require A and C on the phone, or A with B on the computer. There has to be some way to get information from the phone to the computer without an internet connection, and that’s usually done by typing in six digits from an app like Google Authenticator or Authy.
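
Those six digits are typically a time-based one-time password (TOTP, RFC 6238). A sketch of generating one in Node.js, using the common defaults of SHA-1 and 30-second time steps:

    // Generate the current 6-digit TOTP code from a shared secret.
    const crypto = require('crypto');

    function totp(secret, timeStep = 30, digits = 6) {
      const counter = Buffer.alloc(8);
      counter.writeBigUInt64BE(BigInt(Math.floor(Date.now() / 1000 / timeStep)));
      const hmac = crypto.createHmac('sha1', secret).update(counter).digest();
      const offset = hmac[hmac.length - 1] & 0x0f;       // dynamic truncation
      const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 10 ** digits;
      return String(code).padStart(digits, '0');
    }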

Notice, by the way, that if you use A and C, then passwords are not needed at all. People usually re-use passwords, and choose really easy ones. So if you are going to rely on passwords, at least encourage people to use passphrases, by requiring several dictionary words with spaces between them.


Use blockchains for data consistency

When a new device is authorized to be used by a user, and a session is authenticated, it becomes a liability. Anyone who steals the device and unlocks it can make requests as the logged-in user through their authenticated session. So when people lose their devices, you need to be able to revoke the device key. In general, when a computer becomes compromised, you would like to revoke that computer’s keys.

In order to do this, you would have to log into the server with another device and revoke the lost one. However, if all it took was one device to revoke the others, an attacker could quickly revoke all your other devices, locking you out of your own account. It would be your word against theirs. You could send over a copy of some ID card; they could conceivably forge one; and so on.

Instead, it’s better to have some additional public/private key pairs for just this purpose: pairs you keep on other computers, print and hide around town, or give to friends. If you’ve lost ALL your devices, you can still restore your account with M of N of these private keys, plus an additional required key K (so that M of your friends can’t take over your account). The key K could be derived from a passphrase which only you know. If you really want to get fancy, you can allow one of several keys, including keys derived from your biometrics or devices hidden in your butt (you know, for guys like Jason Bourne).
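
A sketch of how a server might check such a recovery request; verifySignature() is an assumed helper wrapping your signature scheme, and the request layout is made up:

    // Recovery succeeds only with valid signatures from at least M of the N
    // registered recovery keys, plus one from the required key K.
    async function canRecover(request, recoveryKeys, requiredKeyK, M) {
      const checks = await Promise.all(recoveryKeys.map(key =>
        verifySignature(key, request.payload, request.signatures[key.id])
      ));
      const validCount = checks.filter(Boolean).length;
      const hasK = await verifySignature(requiredKeyK, request.payload,
                                         request.signatureK);
      return validCount >= M && hasK;
    }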

In all this, however, you are still trusting the server. Many people, for example, trust third-party websites like Facebook to help them authenticate with other sites. The problem is that Facebook then “controls” your identity on those sites, and can cut off your ability to ever get back into them. (The author of this article had such an experience with SoundCloud, after Facebook changed the user ids it reported to SoundCloud.)

We are gradually moving to a web where people own their own identity and information. Qbix is actively participating in this movement by publishing open protocols for authentication, which we implement in our own platform. In such a system, people shouldn’t need to rely on a third party website like Facebook to authenticate with other sites. They should be able to prove they control an account on Facebook, or any other site, by using public key cryptography.

If I can get site A to display something at a URL containing a user id, where only that user could be authorized to post, then I can post some text that includes public keys and is signed by the corresponding private keys. This text could also name the apps or domains I prefer to authenticate with (the new xAuth). When I visit site B and want to prove that I am also some user on site A (i.e. that I control that account), all I have to do is claim this, and the site seamlessly opens my preferred authentication app / domain, which has my private key, and lets it answer a cryptographic challenge, like signing a random piece of text. That proves that the same person who is currently signing into site B also controls the claimed account on site A. Site B can then keep this information. In the Qbix auth protocol, we describe extensions to this scheme where user ids can be pairwise anonymous, so you can share your identities with some people and not others.
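
A sketch of the challenge-verification step on site B, assuming the public key was published as a JWK and that ECDSA P-256 signatures are used:

    // Site B checks that the visitor's signature over a random challenge
    // verifies against the public key published on their site A profile.
    async function verifyClaim(publicKeyJwk, challenge, signature) {
      const key = await crypto.subtle.importKey(
        'jwk', publicKeyJwk,
        { name: 'ECDSA', namedCurve: 'P-256' },
        false, ['verify']
      );
      return crypto.subtle.verify(
        { name: 'ECDSA', hash: { name: 'SHA-256' } },
        key, signature,
        new TextEncoder().encode(challenge)
      );
    }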

When you want information, such as your list of authorized devices, to be propagated in a way that no single site controls, you need a blockchain. Blockchains remove the need to trust one server and its database, which could get hacked or compromised, by requiring many independent servers to validate each transaction. While it’s still possible to compromise all these validators, it’s far harder than compromising one.

Blockchains help solve many problems where multiple validators validate the rules of an evolving stream, and prevent forks. Whether you are revoking a device or transferring ownership of a token, the validators need to make sure that all the rules are followed, and that the stream hasn’t forked off into valid but conflicting streams.

Merkle Trees (and the more general Merkle DAGs) have a great property: if you hold the root hash, you can be sure of the integrity of the entire tree. Given any node in the DAG, you can verify that it belongs in the DAG by simply checking its Merkle branch, which takes O(log n) operations. “Belongs in the DAG” means that some process was followed to generate the hash of each parent from its children, all the way up to the root. “Integrity” means that this process followed the proper rules, at least in the opinion of the computers that generated the DAG.
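
A sketch of checking a Merkle branch in Node.js; the branch format (a sibling hash plus a flag for which side it sits on) is an assumption:

    // Verify a Merkle branch: hash the leaf, then fold in each sibling hash
    // up to the root, taking O(log n) hashing operations.
    const crypto = require('crypto');
    const sha256 = buf => crypto.createHash('sha256').update(buf).digest();

    function verifyBranch(leaf, branch, rootHash) {
      let current = sha256(leaf);
      for (const sibling of branch) { // [{ hash: Buffer, left: boolean }, ...]
        current = sibling.left
          ? sha256(Buffer.concat([sibling.hash, current]))
          : sha256(Buffer.concat([current, sibling.hash]));
      }
      return current.equals(rootHash);
    }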

Now, with public and private keys, you can have many computers sign off on the next parent in a Merkle DAG. All those signatures and hashes are combined into one hash: this is, essentially, a “block”. The trunk of such a Merkle DAG is a blockchain. You don’t need to know about everything in the tree, just the trunk. Holding a hash from the trunk, you have access to a wealth of information signed by participants and validators, and confidence that all transactions up to that point were validated and checked.

In Qbix Platform 2.0, we will focus more on decentralized governance and security, as a drop-in replacement for Qbix Platform features like Streams. In other words, by building on the Qbix Platform today, you just have to focus on representing, say, a chess game, and adding chess rules. When 2.0 comes out, you will be able to have a blockchain verify that all the rules of the game were followed correctly. All the access control and subscription rules that we have now will go from being a domain-specific language to being defined in a scripting language. People will write rules in Javascript, including rules about access, and rules about adding and removing other rules. Validators on the blockchain will sign off on and verify the consistency and integrity of the whole data structure, which you will be able to have just by holding one hash: that of the root.

 
