When the web as we know it was originally designed, the world was a very different place. In this article I look at the world as it was and the world as it is, and explore what I think this means for the web.
| Old World | New World |
| --- | --- |
| One machine - many users (think Internet cafes, shared home computers, etc.) | Many machines - one user (think phones, tablets, personal PCs, etc.) |
| Connectivity is rare (think dialing your modem) | Connectivity is ubiquitous (if occasionally inconsistent) |
| Bandwidth is incredibly scarce (56k modems anyone?) | Bandwidth is plentiful (my cell’s LTE network is faster than my cable modem) |
| User machines have little CPU or memory | User devices have tons of CPU & memory |
| Attackers were pretty rare (viruses were the big scare back then) | Attackers are ubiquitous |
| Services were mostly public and anonymous | Identity über alles |
The consequence of the old-world assumptions is that we treated web clients as both weak and transient. State needed to live on the server because a user might use many different shared machines, and we couldn’t be sure they would ever be back on the same one. We also consciously put all the complexity on the server, since we expected there to be proportionally fewer servers than clients.
I believe the biggest change from what was to what is comes from user devices. Devices are now wickedly powerful and have excellent (if sometimes inconsistent) connectivity. This means the old client/server distinction is largely irrelevant; we need to think instead in terms of a symmetric relationship between peers. But this has some pretty profound implications for data management. In a world of siloed services, how each service handles its data is largely its own business. But in a world where many services are run by the same user on the same device, we have to treat data sharing amongst services on the same device as a first-order problem requiring a standardized, common structured data access and storage model. We also have to accept that users have many devices, so keeping state synced amongst all the services on all of a user’s devices is likewise a first-order problem requiring explicit standardization.
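To make the sync problem concrete, here is a minimal sketch of my own, not anything proposed in this article: a record format carrying a per-device version map (essentially a vector clock), so that two devices comparing copies of the same record can tell whether one strictly supersedes the other or they were edited concurrently. All names here (`SyncedRecord`, `VersionMap`, etc.) are invented for illustration:

```typescript
// Hypothetical record format for syncing state across a user's devices.
// Each device increments its own counter in `versions`; comparing the maps
// tells us whether one copy strictly dominates the other or they conflict.
type VersionMap = Record<string, number>; // deviceId -> counter

interface SyncedRecord<T> {
  id: string;
  data: T;
  versions: VersionMap;
}

type Ordering = "dominates" | "dominated" | "equal" | "conflict";

function compare(a: VersionMap, b: VersionMap): Ordering {
  let aAhead = false;
  let bAhead = false;
  for (const device of new Set([...Object.keys(a), ...Object.keys(b)])) {
    const av = a[device] ?? 0;
    const bv = b[device] ?? 0;
    if (av > bv) aAhead = true;
    if (bv > av) bAhead = true;
  }
  if (aAhead && bAhead) return "conflict"; // concurrent edits on different devices
  if (aAhead) return "dominates";
  if (bAhead) return "dominated";
  return "equal";
}
```

A real framework would also need rules for resolving the conflict case, which is exactly the kind of thing that only works if it is standardized rather than reinvented by every service.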
But perhaps an even more profound change is how users interact with each other. In the old web, if two users wanted to interact they did so by both connecting to the same service, and it was then the service’s ‘secret sauce’ how it managed those interactions. In the new world order, each user has many services on many devices interacting with the many services on many devices of other users. This is a true ‘web’ with a lot of nodes, and the interactivity is explicit, not implicit: we don’t hide the interactions inside a single service’s ‘secret sauce’; rather, all the interactions are exposed and explicit across the whole web of users/devices/services. Thankfully we have a lot of strong tools to build on, including REST, microformats, JSON, etc. But we have to put these technologies together into frameworks that let us build explicit webs of interoperable services.
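As one possible illustration of what ‘explicit’ could mean in practice, here is a hedged sketch: a service on one user’s device delivers a JSON message directly to a peer’s service over plain REST, instead of routing through a shared silo. The `/inbox` path and the message fields are made up for this example:

```typescript
// Hypothetical peer-to-peer delivery: one user's service POSTs a JSON
// message straight to another user's service endpoint. The /inbox path
// and message shape are invented for illustration.
interface PeerMessage {
  from: string;  // sender's service identity
  type: string;  // application-defined message type
  body: unknown; // payload, interpreted by the receiving service
}

async function deliver(peerBaseUrl: string, msg: PeerMessage): Promise<void> {
  const res = await fetch(`${peerBaseUrl}/inbox`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(msg),
  });
  if (!res.ok) {
    throw new Error(`Delivery failed: ${res.status} ${res.statusText}`);
  }
}
```

The point of the sketch is that the wire interaction itself is the contract: any conforming service can receive the message, with no single operator owning the exchange.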
In the old world we just didn’t worry much about identity. Most services were anonymous, and for everything else we figured users would simply have to log in. But in a world where users own many services across many devices and interact with other users’ services across their many devices, identity becomes absolutely central. We must have a standardized way for users’ services to identify themselves to each other. A number of groups are trying to do this today, but I think they are all held hostage to centralized identity models that are prone to both failure and central monitoring/control, even if the ‘centralization’ source is just DNS. I think we can do better, and I will explain how later.
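The author defers his own answer to a later post, so purely as an illustration of the general direction: one well-known way to avoid a DNS- or CA-rooted identity is a self-certifying identifier, where the identifier is derived from the service’s own public key, so proving control of the key proves ownership of the name without any central registry. A minimal sketch using the Web Crypto API (the hex-hash scheme here is an assumption, not any particular standard):

```typescript
// Hypothetical self-certifying identity: the identifier is just a hash of
// the service's public key, so no central authority needs to vouch for it.
async function makeIdentity(): Promise<{ keys: CryptoKeyPair; id: string }> {
  const keys = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    true,
    ["sign", "verify"],
  );
  const pub = await crypto.subtle.exportKey("spki", keys.publicKey);
  const digest = await crypto.subtle.digest("SHA-256", pub);
  // Hex-encode the hash; this string *is* the service's identifier.
  const id = [...new Uint8Array(digest)]
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  return { keys, id };
}
```

Anyone can check a signed message from the service against the key the identifier commits to, and no registry can reassign the name; the hard parts (key rotation, human-readable naming) are where real designs diverge.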
Also in the old world we just didn’t worry that much about security. Great data-sucking machines deployed by entities public and private, aiming for panopticon levels of surveillance, simply weren’t taken seriously as a concern. Heck, we hadn’t even invented terms like ‘zero day’ (although the ideas obviously existed). Alas, these are now very real threats, and the web needs to take them seriously. That means a security infrastructure that doesn’t have hundreds if not thousands of single points of failure (e.g. the entire CA model), and one that takes traffic analysis, zero days, etc. very seriously. It also means understanding that there are just too many ways security can fail (at both the software and the user level), so we have to accept failure as the norm and design a system that is resilient to it.