
Thursday, July 24, 2014

Refactoring the Internet: Taking Down the Walls



Approximately a million years ago, I was involved in a start-up whose only material product was a half-inch-thick business plan for Second Life. It wasn't called that, and has no genetic relation to Second Life (that I know of), but in most of the details that's what it was, virtual currency and all. We pitched this idea to a few potential sources of funding, and the one that took a serious interest was the news company Times Mirror. Right about the time the ink was drying on the first check, Times Mirror had a company-wide meeting in which they declared that the internet was never going to catch on. And so they unilaterally cut all internet-related pursuits, including ours. Their rep already had a plane ticket to bring us the check, so he came up anyway just to say nice try, goodbye.

Maybe it wasn't quite a million years--it was 1994.

When we were first hashing out that plan, it included buying banks of modems for people to call in and connect to our service. That's how it worked back then. When you wanted to use AOL, you dialed into an AOL modem. The content and the means of access were bound together, and for the most part nobody questioned that because nobody had seen otherwise.

Sound vaguely familiar?

But that model was just too poorly factored in terms of cost and flexibility, since every content company (like us) had to individually provide compatible access for every potential user--across an ever-evolving pool of dial-in modem protocols, NSFNET connections from governments and universities, and so on.

And while the content companies loved having their captive audiences, the users didn't like being captive--especially because those users weren't just consuming content, but providing it too.

Looking forward, we projected that this couldn't last: the access and the content could not stay forever bound together. So we declared the AOL model doomed to fail and scrapped that chapter of our business plan, betting instead on network access becoming a commodity by the time we would be ready to roll. (One of many bets we would have won.)

Fast forward to today, and we're basically in the same place again. But instead of the entrance through the castle wall being a dial-up number, now it's a URL.

Once upon a time, URLs were lightweight things--just names of items like a single document or a single photo. But with the advent and evolution of AJAX, a single URL is now a gateway into a whole ecosystem, very much like a modem used to be. And those ecosystems are, by and large, walled off from each other just like they used to be, and largely for the same reasons: Because the first providers spent a lot of time and money developing these gateways and they don't want to share them (it's their competitive advantage). And relatedly, because once you have a captive audience, you want to keep them that way (if your aim is to make money). In a sense, Facebook is the new AOL.

When we evolved out of this stage before, it wasn't the change in who owned the modems that mattered, it was the adoption and standardization of data exchange. Transport standards like FTP, SMTP, IRC, and of course HTTP; data description standards like JPEG, MIME and HTML. Some new, some old, not a wholly elegant bunch but between them all they roughly covered the gamut of functionality people expected at the time, and then some.

In short, it was an evolved, public, de facto schema that refactored the internet from a hodgepodge of dissociated communities (numerous BBSes, CompuServe, AOL, etc.) into a set of intercommunicating applications (inter-compatible clients and servers for email, chat, web, etc.), from access-centric to data-centric.

But now, that public, shared schema is not keeping pace with the data and applications, and so interoperability is waning to the point where we are back to "dialing in" to a small number of dissociated walled gardens. We have crossed the tipping point where even the old schema is atrophying through disuse, and we are slipping backwards, losing interoperability that we once had.

This will change. I don't claim to have the exact address of where it's going, but I have a guess at what the road map looks like. And while you may expect I'm going to talk about ontologies like FOAF, I think the problem starts much lower down. After all, if we managed to conjure up SMTP, HTTP, IRC and the like, you would think by now we could have come up with SNIF (social network interchange format). But that's not really the problem.

One of the most successful schemas is so ubiquitous we forget it's a schema: the file system, with its chunking of data into file objects named by hierarchical paths, and with access permissions that say who is allowed to see what. It's a very shallow schema because it says little about what is in the files, save for the filename extension convention (as in "banana.jpg"). But this simplicity also equals generality, which is part of why this mini schema has served for so long. More particularly, it is one self-contained part of an open hierarchy of schemas, since other schemas like JPEG and SQL rest upon it, just as many application-specific schemas rest on SQL, and so on.

And for a very long time, even well into the internet era, "the" file system was a place we could keep all of our digital information--our photographs, our text documents, our calendar, our email, and so on. But over time computers have evolved from being large and few to being small and many, and in the process, "the" file system has vanished, and has been replaced by "the cloud"--which is to say, it has mostly just vanished.

Similarly, we have vanished, in the sense that once upon a time, at least in the unix world, we had (roughly) a single identity that bound all of our digital services together. The same identity (and password) that let us access our files was also our email address, our talk handle, even our homepage. But now, every digital service we use comes with a new identity. And a new file system.

Today's walled gardens are attempts to re-capture that lost functionality, to bring the convenience and solidity of "one identity, one file system, many services" to "the cloud". The reason we don't have SNIF is not because social networking is markedly harder to cooperatively share than email, but simply because we were already evolving back toward walled gardens by the time digital social networking was catching on. To move forward from here, we need to restore the foundations, and the rest will follow.

Roughly, we need: a way to create and authenticate a universal identity, a way to store our data and information privately but accessible by us from anywhere, and a way to share some of our data and information with others. Within a single machine, historically, these functions have been provided by the operating system, and so in a sense I am calling for us to start thinking in terms of defining an operating system for the internet. Not a big program that runs everything, but a low-level schema, or set of APIs, which provides these basic services to applications in the context of a distributed network of computers.
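
To make that a bit more concrete, here is a rough sketch, in Python, of the three service surfaces such an "internet operating system" might expose to applications. The names and signatures are purely illustrative (nothing here is an existing API); the point is the shape of the interface, not any particular implementation.

    from abc import ABC, abstractmethod

    class IdentityService(ABC):
        """Create and authenticate a universal identity."""
        @abstractmethod
        def create_identity(self) -> str: ...                  # returns a new, permanent ID
        @abstractmethod
        def authenticate(self, identity: str, proof: bytes) -> bool: ...

    class StorageService(ABC):
        """Store our data privately, accessible by us from anywhere."""
        @abstractmethod
        def put(self, identity: str, data: bytes) -> str: ...  # returns a handle
        @abstractmethod
        def get(self, identity: str, handle: str) -> bytes: ...

    class SharingService(ABC):
        """Share selected data with other identities, under our control."""
        @abstractmethod
        def grant(self, owner: str, handle: str, grantee: str) -> None: ...
        @abstractmethod
        def revoke(self, owner: str, handle: str, grantee: str) -> None: ...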

OpenID has made a stab at universal identity, but it's more of a hatchet job, since most people already have (without knowing it) multiple, distinct OpenIDs, each of which is furthermore tied to a particular company (such as MySpace) and high-level protocol (HTTP). Just, eww.

What we need for universal IDs is something that comes into existence only because we make it (generally just once, but perhaps more if we wish to maintain personas or whatnot), and which we can keep forever no matter what, ideally without the need for any central authority (handing out unique IDs or whatnot). This is a non-trivial problem, but I imagine it could be solved to a reasonable compromise of reliability and convenience.

One approach, for instance, would be to generate the ID as a cryptographic hash of some identifying record. If this is to be your legal ID, that record could be your birth name, date, country, and whatever additional legal identifying information applies to your country--whatever it takes so that even lacking any digital keys, you could still prove you own that ID. Another approach would be for the ID itself to be opaque--that is, not universally unique at all, but merely the handle by which you refer to yourself. In either case, you would then associate a public key with your ID, with either some trusted institution like your bank signing that association, or a sufficiency of your friends, or both, depending on the level of proof required. Note that said bank could later vanish from the earth and it would have no impact on your identity, other than perhaps needing to find another signer. In the case of the opaque ID, everybody would have their own ID for you (think of it as the address of your record in their memory), but would know of you by "name" from your public key.

The reason this little indirection is worth it is that, should your private key ever be stolen or lost, you do not want to lose your lifetime's worth of identity with it. Effectively, at that point you file for a "legal name change" where your friends, bank, and whatnot re-appoint your ID to a new public key. For a visible ID, this is a simple change of association. For the opaque ID, it would be "the person previously known by public key ABC is henceforth known by public key XYZ." For most purposes, a trust network through friends would suffice. For legal uses, trusted institutions would probably be involved. In any event, your ID should not be permanently tied to an email address, website, public key, or anything else that might become invalid in the coming years. Most existing systems fail this test.
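
As an illustration of the hash-of-record approach, here is a small Python sketch (the field names, the endorsement list, and the re-keying step are all hypothetical) of how an ID could be derived from an identifying record and then associated with a replaceable public key:

    import hashlib
    import json

    def make_identity(identifying_record: dict) -> str:
        """Derive a permanent ID as a cryptographic hash of an identifying record.
        The record itself (birth name, date, country, ...) is what you could fall
        back on to prove ownership even if every digital key were lost."""
        canonical = json.dumps(identifying_record, sort_keys=True).encode("utf-8")
        return hashlib.sha256(canonical).hexdigest()

    identity = make_identity({
        "name": "Jane Q. Example",
        "born": "1970-01-01",
        "country": "US",
    })

    # Association of a public key with the ID, endorsed by signers (a bank, a
    # sufficiency of friends, or both). Actual signatures are elided here; a
    # real scheme would use standard public-key cryptography.
    association = {
        "identity": identity,
        "public_key": "ABC...",
        "endorsed_by": ["friend1", "friend2", "bank"],
    }

    def rekey(association: dict, new_public_key: str, endorsers: list) -> dict:
        """The 'legal name change': the person previously known by key ABC is
        henceforth known by key XYZ. The identity itself never changes."""
        return {
            "identity": association["identity"],
            "public_key": new_public_key,
            "endorsed_by": endorsers,
        }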

As for storage and sharing of information, the closest thing I've seen in spirit is Camlistore. I appreciate their goals, but have many issues with their approach, which I'll footnote below*. Once again, a good solution is non-trivial, but it will happen eventually--hopefully before a bad solution gains too much ground! Thoughts on what a good solution might entail:

The aspect of the file systems of old that we care about here is not the nitty-gritty of how the files are stored on disk, but rather the API by which we access them. Likewise for our internet-wide file system: we don't care how things are stored, we care how to ask and tell about them. On one hand this means we are defining a protocol; but on the other, we are aiming to replace the file system--which means this protocol needs to either be, or at least support, the sort of straightforward object-level access which application developers won't strangle us over. Correspondingly, the core specification should be an abstract API that is both language and encoding agnostic, with particular protocols and per-language APIs being derivative and non-exclusive. That is, most of the logic in an application should not change if the actual communication protocol changes from JSON over HTTP to MIME-encoded Protocol Buffers via email. Too many approaches to this sort of problem tie themselves to a particular encoding, transport protocol, or local storage paradigm; these things should be essentially invisible to the application, and so easily swappable for alternatives.
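
To illustrate that separation, here is a toy Python sketch (with made-up names) in which the application codes against an abstract agent and the transport and encoding are an injected detail; a JSON-over-HTTP or MIME-over-email transport would be a drop-in replacement for the local stand-in used here:

    from abc import ABC, abstractmethod
    import json

    class Transport(ABC):
        """Encoding and delivery details live here, and only here."""
        @abstractmethod
        def send(self, request: dict) -> dict: ...

    class LocalJsonTransport(Transport):
        """Stand-in transport: round-trips the request through JSON and
        answers from a local store."""
        def __init__(self, store: dict):
            self.store = store

        def send(self, request: dict) -> dict:
            request = json.loads(json.dumps(request))  # simulate encode/decode
            if request["op"] == "fetch":
                return {"data": self.store.get(request["hash"])}
            return {"error": "unknown op"}

    class Agent:
        """What the application codes against; it never sees the wire."""
        def __init__(self, transport: Transport):
            self.transport = transport

        def fetch(self, object_hash: str, priority: str = "low") -> dict:
            return self.transport.send(
                {"op": "fetch", "hash": object_hash, "priority": priority})

    agent = Agent(LocalJsonTransport({"abc123": "hello"}))
    print(agent.fetch("abc123"))  # {'data': 'hello'}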

As application programmers, we want to say things like: I'm told there exists an image with this hash; fetch it for me with low priority and let me know when it's available. Or: I am interested in this chat thread; tell me the last twenty entries, and alert me in the future when it changes. Or: add this item to my set of calendar entries. Chiefly these boil down to a small set of actions: getting and putting blocks of immutable data, and reading, writing, updating, and monitoring a few dynamic data types such as sets, arrays, pointers, and key-value maps. These are roughly the same things that any programming language provides at its core, and correspondingly we can build most anything we want on top of them. That is, a "chat thread" from the API's perspective is just a dynamic set like any other. The higher-level schema that makes it a chat thread is managed and implied by the applications that use it; the low-level API need only concern itself with the general functionality of dynamic sets.
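
As a sketch of how small that core could be, here is a toy in-memory version (Python, hypothetical names) of the two halves: content-addressed immutable blobs, and one dynamic type (a set) with change notification. On top of this, a "chat thread" is just an application-level convention.

    import hashlib
    from typing import Callable

    class Store:
        """Toy in-memory core: immutable, content-addressed blobs, plus one
        dynamic type (a set) with change notification."""
        def __init__(self):
            self.blobs: dict[str, bytes] = {}
            self.sets: dict[str, set] = {}
            self.watchers: dict[str, list[Callable]] = {}

        # Immutable data: the name is the hash of the content.
        def put_blob(self, data: bytes) -> str:
            h = hashlib.sha256(data).hexdigest()
            self.blobs[h] = data
            return h

        def get_blob(self, h: str) -> bytes:
            return self.blobs[h]

        # Dynamic data: add to a named set, notify anyone watching it.
        def add_to_set(self, set_id: str, item: str) -> None:
            self.sets.setdefault(set_id, set()).add(item)
            for notify in self.watchers.get(set_id, []):
                notify(set_id, item)

        def watch_set(self, set_id: str, notify: Callable) -> None:
            self.watchers.setdefault(set_id, []).append(notify)

    # A "chat thread" is just a dynamic set of message blobs; only the
    # application knows (or cares) that it is a chat thread.
    store = Store()
    store.watch_set("thread/42", lambda s, item: print("new message in", s, ":", item))
    msg_hash = store.put_blob(b"hello, world")
    store.add_to_set("thread/42", msg_hash)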

This sort of layering is as old as computer science, but unfortunately has not been the norm in this particular space. Diaspora, for instance, appears to blend the levels throughout, so that the majority of the code base from root to leaf is tied to their one particular application.

The closest I have seen to the whole shebang is RetroShare. Unfortunately, it is not well documented, and my impression is that it is not a protocol, as I think it needs to be, but rather a sprawling code base in one particular language, yadda yadda. But at least it looks like they paid some mind to separating the layers, and they do cover a number of relevant aspects which I haven't had the time to touch on here.

For a random example of the sorts of things the task entails: note that in the context of distributed access to a dynamic set with varying permissions, it is possible that one person's view onto a set is different from another's. These are the kinds of new issues that need to be hashed out in general terms, not specific to any application. As an application programmer, writing an agent in this ecosystem, you should be able to say: I am operating under authority of this user; I would like to see the contents of this set and be notified of updates. And you should be able to do that in a call or two, which translate to a message or two passed to the agent you are talking to. That agent in turn may be local to your machine, answering your questions from a local store; or on the other side of the world; and save for a difference in delay getting back to you, it is the same call, and you will get back what the other agents decide you are allowed to see. In principle, if done right, this API would be the new operating system as far as most applications are concerned, and upon that operating system we could evolve the higher level schemas we need to support our messaging, social networking, file sharing, banking, p2p payments, news distribution, and so on.
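
Here is a toy sketch of that permissioned-view idea (hypothetical, and glossing over how authority is actually proven): the same call returns different contents depending on whose authority the caller operates under.

    class SharedSet:
        """A dynamic set whose items carry access lists, so two callers can
        legitimately see two different versions of 'the same' set."""
        def __init__(self):
            self.items = []  # list of (item, allowed_user_ids); None means public

        def add(self, item, allowed=None):
            self.items.append((item, allowed))

        def view(self, user_id):
            """What this user is allowed to see. Whether the answering agent is
            local or across the world, the call and its shape are the same."""
            return [item for item, allowed in self.items
                    if allowed is None or user_id in allowed]

    s = SharedSet()
    s.add("public announcement")
    s.add("private note", allowed={"alice"})
    print(s.view("alice"))  # ['public announcement', 'private note']
    print(s.view("bob"))    # ['public announcement']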

I think that until it is tackled at this (low) level, our gardens will forever be walled off from each other. And I think it is inevitable that we will do it eventually.


* My sense of Camlistore is that their lowest level, the blob server, is both too specific and insufficient, for the same reason. The API level most applications are going to care about is the one that deals with whole files as single objects, one that understands both immutable objects, such as image files or versions of a document, and mutable objects, such as the large, dynamic set of ongoing messages in a public chat thread. The Camlistore blob server deals only with content-addressed, immutable chunks of limited (sub-file) size, which is one way to provide a back end for such an API but far from the only one; and it is not really sufficient even there, because any layer above will need to store mutable state... which is why they have a separate, mutable key-value store in their implementation (and it is unclear to me how well they have addressed conflict resolution there, if at all). In short, their archetypal representation is too specific to handle everything they need to handle. Furthermore, they do not provide contextual (external) typing of their blobs, nor consistent self-typing, but rather sniff the blob contents in order to recognize meta-information versus raw content. But since the raw content is arbitrary, doing something like implementing a Camlistore blob server on a FUSE file system backed by another Camlistore blob server would self-destruct as the latter blob server started obeying commands directed at the former. (Technically it would be the associated search engines that would get confused.) I discussed all this with them a few years ago, but they were not swayed.
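
For readers unfamiliar with the terms, here is a generic sketch (not Camlistore's actual API) of the distinction being drawn: content-addressed, immutable chunks alone cannot express "the latest version of X", so some mutable layer has to sit alongside them.

    import hashlib

    class BlobStore:
        """Content-addressed, immutable chunks: the name IS the hash of the
        content, so nothing stored here can ever change in place."""
        def __init__(self):
            self.blobs = {}

        def put(self, data: bytes) -> str:
            h = hashlib.sha256(data).hexdigest()
            self.blobs[h] = data
            return h

        def get(self, h: str) -> bytes:
            return self.blobs[h]

    class MutablePointers:
        """The separate mutable layer a higher level inevitably needs: a
        key-value map from a stable name to the current blob hash."""
        def __init__(self):
            self.heads = {}

        def set_head(self, name: str, blob_hash: str) -> None:
            self.heads[name] = blob_hash

        def get_head(self, name: str) -> str:
            return self.heads[name]

    blobs, heads = BlobStore(), MutablePointers()
    heads.set_head("my-document", blobs.put(b"draft one"))
    heads.set_head("my-document", blobs.put(b"draft two"))  # the document "changes"; the blobs never do
    print(blobs.get(heads.get_head("my-document")))  # b'draft two'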





Simon Funk / simonfunk@gmail.com