Twitter and email have a lot in common: they’re both ways to communicate over the Internet, they both center around sending primarily textual messages but support including other media, and they’re both used by hundreds of millions of people every day.
But they have one huge difference: Twitter is owned by a single company, and that ownership has made the company worth about twelve billion dollars.[FN: 12.65 billion as of this writing, but what’s a few hundred million dollars among friends?] Email, on the other hand, isn’t owned by anyone and thus isn’t directly making anyone money. (That’s not to say that companies aren’t making money off email services like Gmail and Outlook—of course they are. It’s just that these companies are making money off their implementations of email, not off email itself.)
Another way to come at this distinction is by focusing on exclusivity. If I wanted to start a Gmail competitor tomorrow, my very first users could send emails to every Gmail user (and everyone else with an email address). But if I wanted to start a Twitter clone tomorrow, my first user could only tweet to themselves—they wouldn’t have access to Twitter users unless and until Twitter decided to let them.
All of this is a long way of saying that Twitter is a company, while email is a protocol. Why?
I have a theory. But before I get there, I want to talk a bit about one popular explanation that I think misses the mark.
One popular idea is that the way Twitter ended up—as a closed system controlled by a single company—is the normal state of affairs in a capitalist society: someone invents a thing, and then owns that thing and profits from their control of that thing.
According to this theory, the only reason that email isn’t like Twitter is that email was first developed by government and nonprofit groups, who had no interest in establishing that sort of control. Under this theory, if Microsoft or some other company had invented email, then they would have treated it just like Twitter, and email users would be locked into a single platform today in exactly the way Twitter users are.
This theory predicts that the “next Twitter” will play out just like this Twitter did: whoever invents it will keep it proprietary and run it as a closed system. At least, that is what will happen if the inventor of the next Twitter is a profit-seeking company. To avoid this outcome, some believers in this theory participate in open source projects or find new ways to finance innovation. Indeed, one of the main (ideological) motivations for Initial Coin Offerings “is to fund the development of a network that will one day exist without you”—that is, to fund the next Twitter while credibly promising not to have control over it.
I disagree with this pessimistic take. I think that there’s a good chance that the next Twitter will be much more open than Twitter, even without government involvement, open source code, or funding by an ICO. All it needs is some good copycats.
To see what I mean, let’s play out how the actual Twitter would have developed if it had faced a bunch of copycats from the very beginning. Imagine that seven Twitter clones had sprung up shortly after Twitter got started. As the first mover, Twitter keeps an edge and winds up with 30% of the users, while the other seven average 10% each.
At first, this balkanizes the user base, since each company is trying to build a proprietary platform and is only letting their users tweet at other users of their platform. This is bad for Twitter users, who can only tweet at 30% of the overall user base, but it’s really bad for the users of the Twitter clones, who are limited to 10%.
What happens then? I’m guessing that some of the clones would band together and agree to open up their protocols, at least to each other. If two of them band together, their users can reach 20% of the market, making those platforms much more attractive than the other clones and giving the rest a reason to join the new consortium. Once four or more band together, they collectively have more of the market than Twitter itself. Eventually, even Twitter would want to join.
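The arithmetic behind this consortium story can be sketched as a toy reachability model. The numbers are the post’s hypothetical shares, and the platform names are made up for illustration:

```python
# Hypothetical market shares from the thought experiment:
# Twitter keeps 30%, seven clones hold 10% each.
shares = {"twitter": 0.30, **{f"clone{i}": 0.10 for i in range(1, 8)}}

def reachable(platform, consortium):
    """Fraction of all users someone on `platform` can tweet at.

    Members of an open consortium reach every consortium member's
    users; a closed platform reaches only its own.
    """
    if platform in consortium:
        return sum(shares[p] for p in consortium)
    return shares[platform]

# No consortium: everyone is siloed, and Twitter's 30% leads.
assert reachable("twitter", set()) == 0.30
assert reachable("clone1", set()) == 0.10

# Two clones open up to each other: 20% reach, beating every closed clone.
assert abs(reachable("clone1", {"clone1", "clone2"}) - 0.20) < 1e-9

# Four clones together already out-reach Twitter itself.
pact = {"clone1", "clone2", "clone3", "clone4"}
assert reachable("clone1", pact) > reachable("twitter", pact)
```

Each clone that joins strictly increases every member’s reach, which is why the consortium snowballs once it starts.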
Play this out long enough, and it ends up in the same situation as email: with a shared protocol that all companies can use. Companies could still make money in this model, but they’d make it from the services they add on top of the protocol, not from ownership of the protocol itself.
Note, however, that this only works if Twitter faces early challenges from multiple clones at the same time. If Twitter is only ever challenged by one company at a time, then that company doesn’t have anyone to band together with to get the momentum for an open protocol started (and Twitter wouldn’t agree to give up ownership of its protocol without good reason).
And if Twitter is able to fully establish dominance, then the challengers won’t have the user base to compete, even if they band together under an open protocol. For instance, if Twitter has 80% of the users, then the clones can be as open as they want with each other, but their users will never have access to the bulk of the tweeting people out there, and Twitter can probably write off the remaining 20% without harming its platform. (Especially since that 20% will shrink as people jump ship to Twitter for access to more users.)
That gets us to the final question: Why wasn’t Twitter deluged by copycats? After all, Twitter is now worth 12+ billion dollars—if a copycat could have captured even 1% of that, you’d still be talking over a hundred million dollars.
And that gets me back to the title of this post: Twitter didn’t face copycats because it seemed kind of dumb. Or, put a bit more exactly: the use case for Twitter was not obvious to many of the people who would have started or funded copycats.
Twitter was entering a world that already had email, blogging, RSS, and Facebook. If people wanted to get their ideas out publicly, they had a ton of ways to do so. In fact, if Twitter was different from those services, it was just by letting other people sign up to follow individual users (which, to the extent that it wasn’t like Facebook, seemed a lot like old-fashioned bulletin-board mailing lists, which were already on their way out). And Twitter’s other distinguishing feature was limiting users to 140 characters—which must not have seemed like a great feature at the time. After all, people could already send short emails or make short blog posts; if they were choosing not to, it must be because they didn’t want to be that brief. Why would enforcing length limits make the platform more attractive?
My point is not, of course, to defend any of these points—with the benefit of hindsight, we can obviously see the value that Twitter provided. Instead, my point is that Twitter’s value wasn’t obvious. If it had been, then Twitter would have faced enough copycats to make sure that we ended up with an open tweeting protocol.
Given that, I feel fairly sanguine about the chances of the next Twitter ending up with an open protocol. Sure, we could have another Twitter where the use case isn’t obvious until it’s too late for the copycats. But I think it’s far more likely that the next Twitter will have a more obvious appeal, will face copycats, and will end up with an open protocol.