Ben Thompson’s Stratechery, Part 2
In which the Kool-Aid gets stirred
I was impressed by, and enjoyed, the first two-thirds of Ben Thompson’s response to my critique of his work related to tech platforms and the Google antitrust case. (The last third is more regrettable, but I’ll address that, too.) I’ve enjoyed most of the exchange, and it has helped me clarify my thoughts. In particular, I’ve come to feel that Google’s spending $30 billion on traffic acquisition feels too much like the kind of thing Net Neutrality was trying to prevent back in the 2000s. But more on that later.
Overall I’d say Thompson’s response hasn’t done much to counter my core critique. He is too quick to jump from model to reality, too quick to assume that one important source of advantage represents something close to the full picture. As I said, his work is insightful for understanding tech business models, but when it comes to antitrust, it is on much shakier ground. And the hardest part for him to defend is the assertion that various costs, including switching costs, are zero or near zero. For doing so assumes away the costs that could be critical in determining strategic advantage, winners and losers, and at some point, liability in a lawsuit.
Let’s focus on the Google case, which, ironically, we largely agree on; it is nonetheless a good proxy for examining our differences. In case you’ve been living under a rock, the Justice Department sued Google, alleging that it is maintaining its monopoly using means that violate the Sherman Act. Thompson says that as “[Wu] does not believe that Google is unique as far as scalability is concerned, he appears to assume that the company must be doing something nefarious to command such market share.”
I’d turn this around and say that I don’t actually assume that the company must be doing something nefarious, but, more importantly, I don’t not assume it, either. And that may make for the key difference between us.
Google is unquestionably one of the most impressive companies of the last 30 years, the cream of Silicon Valley’s late-1990s crop. Indeed, for much of my early career, I was accused of being too big a booster of Google.
But despite its origins as an impressive company with an undeniably great product, life is long, products are dynamic, and 15 years into Google’s dominance, we face an open question. That is whether:
A: Google search gained an advantage due to superior technology, user experience, aggregation theory, having the smartest coders, or whatever, and has continued to hold its advantage due to that reason; or
B: Google gained market leadership that way, but has subsequently come to rely in significant part on methods that include paying for traffic through defaults, acquisitions, subtle increases in switching costs, and other techniques having little to do with pure quality.
Both A and B are plausible — so which is true? The answer is: It is an empirical question. That is a question of fact, not of theory. And that is why I think Thompson’s aggregation theory, even if it does a good job explaining the initial advantage, isn’t helpful for telling us whether we now live in World A or World B. In fact, aggregation theory is positively unhelpful to law enforcement, to the extent it is read to insist that we must be living in World A because of zero switching costs. That comes close to assuming away the question, or stated another way, elevating theory over facts.
What we really need to settle the A/B question is not theory, but evidence. Which leads us back to a key evidentiary question: Just why is Google willing to pay Apple and other distributors so much for “default” placement of its product (and relatedly, why hasn’t it settled the case and stopped paying)? According to its 2019 10-K, Google paid out over $30 billion to acquire traffic, much of which seems to have been paying for such defaults in various forms.
So what was Google paying for?
The funny thing is, if you have what people want, you don’t usually pay for distribution: people pay you. The NFL doesn’t pay NBC to carry Sunday Night Football, and George Clooney does not pay to appear in movies. Knowing nothing else, you might then expect Apple to be paying Google if it has such an intrinsically superior product. That would follow if aggregation theory is 100% right about the quality of the user experience creating a true winner-take-all situation.
Here are three possible explanations: A, B, and C. Explanation A, the most benign, is that Google has an inherently superior product but still thinks it’s worth paying tens of billions for a default placement because being the default is worth it for reaching more customers and increasing its revenue. In other words, unlike George Clooney, Google can make money by paying to be in movies.
Explanation B is that Google maybe makes more money, but is also afraid of a competitor arising on the iPhone or elsewhere — especially a Microsoft or Apple competitor. It is therefore willing to spend billions to avoid that competition and the corresponding loss of revenue. That’s anticompetitive.
Explanation C is partial, but posits that the core deal, the Apple deal, was actually driven in large part by Apple’s market power, and that Apple and Google have colluded to divide up the search market. The deal is this: Google gets freedom from an Apple competitor, but Apple gets billions in return. In this story, they are splitting the monopoly proceeds.
I think it is hard to rule out A, B, or C on the evidence we have. Indeed, figuring out the best fit is what the law and courts are supposed to do, and some of the best evidence may reside in sworn testimony or an email chain somewhere. And there may be partially overlapping reasons.
But what isn’t particularly helpful to answer this question is Thompson’s aggregation theory because, again, it is an idealized model of how Google works, not the reality, and it assumes too much away. That’s why, as I said, I find Thompson’s work useful for learning and thinking about digital markets, but not so useful for answering core antitrust questions, like this one.
This brings us, finally, to switching costs and the influence of defaults on user behavior. These matter because if there truly were zero switching costs, that would suggest that the only reason for Google’s persistence is quality. Thompson, if I have him right, thinks that the switching costs related to Google search are zero or near-zero because his theory presumes that transaction costs are zero, which leads to some of the “competition is a click away” stuff. I’d say, like a broken record, that this is another empirical question, and one unlikely to be settled by either theory or the anecdotes of technically sophisticated users (“Hey! It wasn’t hard for me to switch”). Indeed, the latter is maybe the worst form of evidence, akin to saying, “none of my friends voted Republican, so what’s going on here?”
One thing neither Thompson nor I discussed in our first exchange was the effect of defaults on user behavior, but they are very relevant to switching costs and also what is at issue in the actual Google case. Numerous studies suggest that consumer defaults are highly “sticky,” in various contexts — that is, very few people bother to change a default (here’s an example about 401(k) plans). That’s relevant because defaults can heighten the effect of even very small switching costs.
I would agree that what you might call the literal costs of switching are low for a user with even basic technical skills. But that isn’t a full accounting, nor enough to declare zero switching costs. For when you add in the effects of defaults and other factors like access to maps, the coziness of the Google suite, and old-fashioned brand loyalty, I think you end up with a package that tends to discourage switching. And why not? Google wants, like any company, for people to stay with it, even if it doesn’t quite want to try to imprison people in the style of your friendly neighborhood cable company.
Some readers, and Thompson, were confused as to why I brought up Gmail. I think the suite of products that make up the Google ecosystem also have some nonzero effect on switching costs. It is common to try and maintain user loyalty with free goodies and perks, and doing so invokes the norm of reciprocity. That’s the sense that with Gmail and other gifts, you are getting something from Google and are indebted, an aspect of human psychology that user interface designers often rely on. There’s more: The user needs to know that the default can actually be changed; there’s subtle mental load in switching from Gmail to Bing, missing out on Google Maps, and even simple brand loyalty — all of these factors, I think, keep people in the Google family and make switching costly. Does that mean it should be illegal to ply people with great add-on products? Obviously not, in most cases. But that’s not the question: It is whether something other than search quality helps keep people with Google. And that cannot be answered by pure theory.
All this said, it may still be the case that the quality of Google search, and not the extra goodies nor the money spent on defaults, is what keeps people with Google. (And the defaults, by the way, could still be anticompetitive, even if quality plays a role in keeping people with Google, as Brett Frischmann points out.) But at core we have a contestable empirical question, not one answered by saying things about how transaction costs and marginal costs are zero.
Let me conclude this section with some of my own thoughts on Google and its market position. Writing this entry has made me more critical of Google, for one big reason: that $30 billion spent on traffic acquisition. One of the ideals of the open internet and Net Neutrality, circa 1995 or so, was that the internet ought to be a kind of level playing field — let the best site or app win. In those days, the feeling was that a quasi-academic search engine out of a garage with a great new algorithm could beat out an established giant (DEC’s AltaVista). Nothing is perfect, but I did feel like the 2000s was fairly meritocratic: Google beat AltaVista and Yahoo!, Facebook beat Myspace, Prodigy lost out to everyone, mainly because of better code.
In that context, the idea of Yahoo! paying Verizon broadband to guarantee a win over Google (which was actually better) would have felt ugly to me: the entrenchment of what was, at the time, clearly an inferior product. But here, in the 2020s, the $30 billion a year being paid out by Google feels a lot like Yahoo! paying off Verizon in the ’00s. Spending that kind of money has strongly entrenching potential. It feels like what we were trying to prevent with Net Neutrality, and I don’t like it one bit.
As I’ve said repeatedly, I’d like Google to be the company it could be, for it to prove its merits, on the merits, without the side payments/bribes. Maybe that’s too hopeful or naive, but there you have it: I’m an idealist at heart.
On the positive or constructive side, Thompson and some of his readers suggested I didn’t pay enough attention to his distinction between a “platform” and an “aggregator,” so I’ve spent some time thinking about that. The question is, what is gained by the distinction? Or maybe to ask the question differently, does the distinction, if it does matter for business model purposes, matter from the perspective of an antitrust enforcer who is interested in the willful creation of barriers to entry?
My first instinct was to think that the semantics of the word “platform” were getting in the way. (It has a technical meaning, as in “the Windows platform,” but also a theoretical meaning in the two-sided markets literature, as a meeting place between buyers and sellers.) But what Thompson means by the distinction between a platform and an aggregator is slightly different, and idiosyncratic to him, though it does often relate to the technical, enabling function. For him, a platform provides tools, usually technical, to allow suppliers or producers to reach users, and then becomes the only way to reach that producer in that form. And so Windows and other operating systems (CP/M, Unix, iOS, Android) are all platforms. A web browser is, presumably, a platform, too. The suppliers, or the apps, depend on the platform and can’t do much without it.
In contrast, says Thompson, Google Search doesn’t play any role in enabling its suppliers: Its value lies in making it easier for people to get to stuff that is, in some sense, already out there. Other aggregators, according to Thompson, include Netflix, Uber, and the third-party parts of Amazon. They all, according to Thompson, aggregate stuff that is already out there. There are other ways of getting to websites, reaching drivers, watching movies, etc.
So what I like about Thompson’s effort is that it makes an interesting distinction between two types of business models that are often grouped together as just “platforms.” It does seem to be getting at something. If I were running a business, I’d want to understand the difference well. It might also, as Thompson suggests, help us understand what we want from regulation or antitrust action in this space. And disappointingly (for those interested in a fight) Thompson and I agree on certain things, like the dangers of control over and abuse of an exclusive monopoly platform. Thompson also seems to suggest that you should prevent aggregators from buying each other, and I agree with that too.
But here are some of the problems that I see, first in the distinction itself and second as a guide to law enforcement.
First, there are an awful lot of entities that fall somewhere in the middle, once you get away from the canonical examples of Google Search (aggregator) and Microsoft Windows (platform). It feels like you can often argue about which category something falls into. (Thompson makes a distinction between level 1, level 2, and level 3 aggregators, which has to do with the costs of gaining a supplier, but I don’t think this fully settles the problem.)
So iOS is a platform in Thompson’s language because you cannot publish iPhone apps without it. But then the App Store curates stuff and helps you find things you want using search. Thompson refers to the App Store as an aggregator. But does this mean that if you add a search function to any platform then it becomes an aggregator, too? If Bill Gates had, in the ’90s, thrown up an Application Store on Windows would he have been both an aggregator and a platform? What does that tell us about the antitrust action then?
Similarly, Thompson describes YouTube as an aggregator (level 2 or 3, I think) but it has obvious platform tendencies too. The producers of videos need to set up their channel, upload to the YouTube interface, and then typically go try and find audiences for their channel. You can’t get to a YouTube video other than through YouTube — they aren’t “out there.” On the other hand, YouTube does help people find the videos they want through its search function and some algorithmic curation. So once again we seem to be in a gray zone, with much dependent on adding a search function.
In this distinction, I’m reminded of Marshall McLuhan’s differentiation between “hot” and “cool” media (movies and radio were “hot”; telephones were “cool,” and so was TV for some reason), which always felt like it was getting at something profound, but it could sometimes be hard to be quite sure what. And while it often seemed like you could debate the point, McLuhan would authoritatively pronounce every medium one thing or another.
Another challenge for the distinction (which maybe doesn’t matter to Stratechery readers) is whether it does useful work outside the world of digital markets. For example, are credit card payment systems (a two-sided platform, in economic terminology) one of Thompson’s aggregators, platforms, or something else? Well, they seem like aggregators in some ways because they aren’t the exclusive means to reach consumers, and a business can exist without the credit card network. The stores would be there whether or not the credit card companies were. But they don’t exactly help you find stores, curate them, or bring them to you. So what are they, then? And if the answer is that the model is limited to digital markets, well, what’s a digital market? A firm like Uber is at least halfway online, and after all, you can use credit cards to buy things online.
After a while, I started to think that the match-making function of the aggregator is what matters most, at least for Thompson’s model. (Another writer, David Evans, has a book entitled Matchmakers that captures this idea well.) Thompson’s aggregators might be valuable because they specialize in helping people find a match for what they want. But if that is what we’re speaking of, I don’t buy the winner-take-all stuff. Seen this way, a real estate agent might be one of Thompson’s aggregators — there’s a large inventory of apartments or houses out there, and the agent is one way of helping you find a good fit. Yet there are thousands of real estate agents. Perhaps Thompson will say that this is where zero marginal costs make all the difference. Anyhow, my point is that we need more than this distinction to understand how contemporary multisided markets really work, which I will admit is no easy thing to get your head around.
Who knows, perhaps Thompson has answers to these questions in his archives somewhere, and as I said, the distinction is interesting. But for antitrust enforcement, there’s a different question: What turns on the label? Stated otherwise, should the label matter to law enforcement?
Yes and no. It does make sense for law enforcement to understand digital business models better than they sometimes do. And I think Thompson might be onto something when he suggests that the concerns related to aggregators are horizontal (I’d say, you really don’t want or need an aggregator disabling horizontal competitors), while those related to platforms may be vertical (you don’t want them messing with adjacent markets). Yet at the same time, they both can have power, and both seem capable of vertical and horizontal harms, so I wasn’t sure how far this went.
What else? Here is my big problem with the prescriptive side: Thompson says “whatever an Aggregator chooses to do on its own site or app is less important, because users and third parties can always go elsewhere, and if they don’t, that is because they are satisfied.” As I’ve said, I don’t buy this conclusion that everything depends on quality for these so-called level 3 aggregators, unless you accept certain assumptions as fact, which I don’t.
Finally, I think it is notable that many of the firms that Thompson describes as aggregators are in competitive markets, Netflix being a good example. Yes, it is a so-called level 1 aggregator, and its supply is expensive, but I still think it is possible to have so-called level 3 aggregators in competition with each other, especially because I think a pure level-3 aggregator as Thompson describes may not exist.
History may not be the favorite topic of Stratechery readers, but try this on. About a century ago, the theory of “natural monopoly” was very popular. It held that some industries were just destined to be monopolized, and would be more efficient that way anyhow, and also more virtuous. There were different reasons, like network effects, high fixed costs/low marginal costs, and so on, but both regulators and economists were quick, at times, to assume that competition could not survive in certain industries.
There may be some industries that are, indeed, natural monopolies (like plumbing). But the problem was that the theory of natural monopoly became a prescription, and was used too broadly to insulate too many industries from competition. A good example is the telephone industry and its great monopolist, AT&T. There were, and are, economic arguments for one telephone company: network effects, low or zero marginal costs, and so on. But the theory became a self-fulfilling prophecy, leading to a 70-year Bell monopoly that, by the end, was extraordinarily stagnant and anticompetitive.
When you looked more carefully, it turned out that even if there was a natural monopoly in local phone lines, it didn’t follow that there needed to be a monopoly in the sale of telephones themselves, attachments like modems, long-distance services, or online services (that is, what were later called ISPs). AT&T, however, said these all needed to be monopolized for the integrity of the system. But when AT&T was broken up, lo and behold, there was competition and innovation in all kinds of places no one expected, yielding, in time, entire new markets, like “the internet” and mobile telephony.
So that’s the history lesson (the full version is in my book, The Master Switch: The Rise and Fall of Information Empires). Here is why it is relevant. I repeat for about the sixth time that understanding “level 3 aggregators” is worthwhile for someone trying to understand the business of tech. And Thompson is not blind to the idea of anticompetitive conduct. But the AT&T example shows why I’m resistant to aggregation theory as a regulatory guide. It is at risk of becoming yet another natural monopoly theory (YANMT, for the acronym-minded). Ultimately, it doesn’t really tell us enough about whether a given market could be competitive, and what other factors might be yielding entrenchment.
To return to my mantra: If you, Mr. Thompson-level 3-aggregator, really are that good, it is the job of the antitrust law to make you prove it, without the steroids.
The last part of Thompson’s analysis is not worth much comment because it strikes me as too Twitteresque and squabbly. It is true, however, that I do bear the burden of apology because I definitely did say some impolite and field-defending stuff on Twitter.
Just one corrective. The “false confidence” I am concerned with is on the part of Thompson’s readers, not Thompson himself. I do think Stratechery is valuable, as I’ve said over and over again, but that over-reliance on it is not to be advised, especially if it discourages readers from further digging and getting into other sources. In other words, go ahead and drink some Kool-Aid, it does taste good! But seek a balanced diet as well. I’ll leave it at that.