TLDR: As has been written before, writing web applications that don't trust the server (e.g. by adding client-side encryption, or by storing all data in local storage and never sending it to the server) hasn't been possible, because the browser requests the web application's code from the server every time you open it and executes it trustingly. We propose a solution, analogous to Binary Transparency on the desktop, using Service Workers, and have implemented it for Airborn OS.
Introduction
Web applications, unlike desktop and mobile applications, aren't normally installed: the browser downloads the application from the server every time you open it. Along with its advantages, this has some disadvantages: for example, web apps don't normally work offline. I say "normally" because in modern browsers, web applications can partially "install" themselves using Service Workers. A Service Worker sits between the web application and the server and caches responses so that the app keeps working offline, among other things.
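As a rough illustration of that pattern, here is a minimal, generic Service Worker; it's a sketch of the standard API, not Airborn OS's actual worker, and the cached paths are just examples:

```js
// sw.js: a minimal cache-first Service Worker (illustrative sketch)
const CACHE_NAME = 'app-cache-v1';

self.addEventListener('install', (event) => {
  // Pre-cache the application shell so it keeps working offline.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(['/', '/main.js']))
  );
});

self.addEventListener('fetch', (event) => {
  // Answer from the cache first; fall back to the network for anything uncached.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```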
There's also a security disadvantage to downloading the web app every time, if you want to verify whether you trust its source code: even if you read all the source code on GitHub, the server could send you a new version tomorrow with no easy way to notice.
Furthermore, the server could send just you a different version of the web application tomorrow. There's no way to check that you're getting the same version as everyone else. This also applies to desktop applications, and the solution there is Binary Transparency, which Firefox is planning to implement. Basically, every release will be put in a public log, and the Firefox Updater will check that the new version matches the version in the log.
Now that we know all the pieces of the puzzle, the question arises: can we use Service Workers to achieve Binary Transparency? It turns out we can, with some limitations.
Implementation
When the browser requests a file (/ or /main.js, say), the request goes to the Service Worker. If the Service Worker has a response in cache, it responds with that immediately. It also requests an up-to-date version from the server. If that differs from the version in cache (i.e. the web app has been updated), the Service Worker asks the GitHub API for a list of the repository's files with their sizes and hashes. If the server's response matches that list, the cached version is updated and the user is shown this message:
That link to GitHub is not just a generic link to the repository: it's a link to the specific commit with the same code that we received from the server (the Service Worker verified that). This makes it easy to check whether you trust the code of the web application.
If the response doesn't match, the user is shown this message:
Note that this isn't a warning, because the user isn't really at risk: we still have the old version in cache. If that's not the case (i.e. this is the first time the user opens the web app), this error is shown instead:
Note that while we chose GitHub as a public log, this is not a requirement. You could check the files against another (possibly cryptographic) log, or check that they are signed with a given public key, etc.
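To make the flow above concrete, here is a hedged sketch of how a Service Worker could verify a file it received from the server against the file list GitHub reports for a specific commit. Git identifies a file by the SHA-1 of the string "blob <size>\0" followed by the file's contents, which the Web Crypto API can compute. The function names and structure are illustrative; Airborn OS's actual code differs.

```js
// Compute the Git blob SHA-1 of a file's contents: sha1("blob <size>\0" + content).
async function gitBlobSha1(buffer) {
  const header = new TextEncoder().encode('blob ' + buffer.byteLength + '\0');
  const blob = new Uint8Array(header.byteLength + buffer.byteLength);
  blob.set(header, 0);
  blob.set(new Uint8Array(buffer), header.byteLength);
  const digest = await crypto.subtle.digest('SHA-1', blob);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('');
}

// Compare one file from the server against GitHub's recursive tree listing
// for a given commit (the trees endpoint accepts a commit SHA or ref name).
async function matchesGitHub(owner, repo, commitSha, path, response) {
  const tree = await fetch(
    'https://api.github.com/repos/' + owner + '/' + repo +
    '/git/trees/' + commitSha + '?recursive=1'
  ).then((r) => r.json());
  const entry = tree.tree.find((e) => e.path === path);
  if (!entry) return false;
  const body = await response.clone().arrayBuffer();
  return entry.size === body.byteLength &&
         entry.sha === await gitBlobSha1(body);
}
```

A real implementation also has to map request URLs to repository paths and deal with GitHub API rate limits, but the comparison itself is essentially this.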
What does this guarantee?
If you trust or verify the Service Worker the first time you get it from the server, this guarantees that:
- Unless you've gotten a message saying Airborn OS has been updated*, you're still running the same code as when you first opened Airborn OS
- Unless you've gotten a message saying you should check your trust in Airborn OS*, you're running the code on GitHub, and therefore the same code everyone else is running
This makes it possible for a security researcher to read the code on GitHub, publish their results (along with the GitHub commit they inspected), and for everyone to verify that they're running that same code.
(*) Or a warning or error, or unless your computer has been hacked, or (for the second guarantee) unless both the server and GitHub have been hacked. Notably, however, the guarantees hold true if the server has been hacked or gone rogue. If the server has been hacked, the attacker also needs access to GitHub to update the code for existing users. And if the developer has gone rogue and pushes malicious code to the server, he also has to push that same code to GitHub, which makes it possible for observers to detect it.
Furthermore, we would like to guarantee that a security researcher can independently verify that all users are running the same code, as long as the server was not malicious when the users first opened the web app.
However, currently, the server tells the Service Worker which commit on GitHub each file comes from. This is necessary because it's hard to guarantee that the browser always gets the latest version on GitHub, due to caches and the use of a CDN. However, an attacker could hide a commit on GitHub, for example in an old branch. To solve this, the Service Worker could check that the commit is in a specific 'release' branch, and is either the latest commit on that branch or no more than 24 hours old. We plan to implement this but haven't done so yet.
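A sketch of what that check might look like, using the GitHub commits API; this is a proposal rather than code that exists in Airborn OS today, and the branch name and helper are illustrative:

```js
// Check that the commit the server pointed us at is on the release branch,
// and is either the branch head or at most 24 hours old.
async function commitIsAcceptable(owner, repo, commitSha, branch = 'release') {
  const api = 'https://api.github.com/repos/' + owner + '/' + repo;

  // The current head of the release branch.
  const head = await fetch(api + '/commits/' + branch).then((r) => r.json());

  // Commits on the release branch from the last 24 hours.
  const since = new Date(Date.now() - 24 * 60 * 60 * 1000).toISOString();
  const recent = await fetch(
    api + '/commits?sha=' + branch + '&since=' + since
  ).then((r) => r.json());

  return head.sha === commitSha || recent.some((c) => c.sha === commitSha);
}
```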
Service Worker lifecycle
The designers of the Service Worker specification have taken great care to make sure that a web application cannot permanently break itself using Service Workers: the browser checks for an update to the SW on every page load, and we can't prevent the SW from being updated if it has been changed on the server. Fortunately, both the old Service Worker and the web app get notified when that happens, so we can notify the user of the update.
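On the page side, detecting that update boils down to the standard registration events. A rough sketch follows; the showTrustWarning function is a hypothetical stand-in for however the app surfaces the message:

```js
// Register the Service Worker and watch for a replacement being installed.
navigator.serviceWorker.register('/sw.js').then((registration) => {
  registration.addEventListener('updatefound', () => {
    const newWorker = registration.installing;
    newWorker.addEventListener('statechange', () => {
      if (newWorker.state === 'installed') {
        // A new Service Worker was installed; we can't verify it ourselves
        // (see below), so ask the user to re-check their trust in the app.
        showTrustWarning();
      }
    });
  });
});
```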
However, we can't actually check that the new Service Worker matches the version on GitHub. We can request the new Service Worker file, of course, but (at least in Chrome) that issues a separate network request from the request that Chrome itself used to get the Service Worker, so we can't be sure that the server replied with the same file.
It would be nice if Chrome put the Service Worker file it downloaded in the network cache, so that the old Service Worker could request it from there. Then we could stop showing the relatively unhelpful warning above.
Service Workers also update on Push Notification and Background Sync events. We currently use neither, but it's worth verifying that browsers start the old Service Worker for the event (while updating asynchronously) and send it an update event, which is what we want. It's also worth verifying that there's no other way to trigger an update, especially without starting and notifying the old Service Worker, since that would be a vulnerability for our use case.
Previous approaches to Transparent Web Apps
Previously, some web apps (including Airborn OS) have used browser addons to increase their security. The obvious disadvantage of this is that users have to install an addon. Also, you can't install addons on mobile Chrome and Safari.
I've previously written about another approach using Certificate Transparency. Basically, the idea is to put checksums of the files of the web app in the website's certificate. Thanks to Certificate Transparency, we can check that there's only one certificate for the website (and by extension, only one version of the code).
The advantage of that approach is that it relies less on trust-on-first-use: it can protect you even the first time you open the web app on a computer. The disadvantages are that it's less flexible (because it's implemented in the browser or an addon instead of in the web app itself), requires more work or tooling to update the web app, and requires work on the part of browser makers to implement. The Service Worker solution, in contrast, can be deployed today.
Next steps
There are many web apps that would benefit from being made transparent: encrypted chat apps and Bitcoin wallets, but also client-side tools such as word counters and photo editors. Let us know if you're a web app developer and need help implementing something like this. We would also like to create a library (maybe with help from other web app developers) to make this easy. Let us know if you want to help or have any feedback!