Ideas Made to Matter
Openness, control, and competition in the generative AI marketplace
Few question whether generative AI is a transformative technology. Yet there is uncertainty about how the market for it will evolve compared with previous radical advances — and whether that evolution will require a new approach to strategy and to competition policy.
Tight control over access to specialized infrastructure and capabilities will likely result in a concentrated generative AI market dominated by a few key players, according to a new working paper by MIT Sloan professor Pierre Azoulay and co-authors.
Yet there is still room to influence the future of generative AI, write Azoulay and co-authors Joshua L. Krieger, PhD ’17, of Harvard Business School, and Abhishek Nagaraj, PhD ’16, of the University of California, Berkeley. Azoulay and Nagaraj are also researchers at the National Bureau of Economic Research, which published the paper.

The work of exerting such influence starts with dispelling myths about strategy in technology-intensive industries, the researchers write. One such myth is that the traditional competition rule book can be discarded in the case of generative AI because the technology is so transformative.
“I’m not in denial. Generative AI will have profound effects across many sectors of the economy,” Azoulay said. “But with each technological revolution, experts always arise to claim that the rules of competition have been upended. … It’s not instantly obvious why that would be true.”
Another myth is that the appearance of open-source foundation models (such as Meta’s large language model, Llama) will lead to “an entrepreneurial free-for-all,” Azoulay said. In fact, dominant firms will likely maintain control over critical pieces of infrastructure.

So what does shape the competitive environment for generative AI? There are two key factors, the researchers write. The first is appropriability: whether firms can prevent the knowledge necessary to architect, train, and serve AI models from “leaking out” beyond the boundaries of their organizations.
The other factor is complementary assets — whether market entry requires access to specialized infrastructure or capabilities. These are not new concepts, Azoulay said: Scholars of technology markets have been analyzing strategy and competition through this lens since the work of David Teece, a UC Berkeley economist, appeared in the mid-1980s.
Openness, by itself, does not dissolve monopolies
AI experts have quipped that the only thing open about OpenAI, perhaps the leading AI firm in 2024, is its name. The company has shrouded important technical details in secrecy, including model size, hardware, training compute, dataset construction, and training method.
Users may be able to access OpenAI’s GPT-4, Anthropic’s Claude, or Google’s Gemini models through application programming interfaces. But without access to a model’s weights, the hundreds of billions of numerical parameters it uses to make predictions, users cannot replicate these models, probe their limitations, or improve upon them.
In this context, one factor that did lead to optimism was the emergence of a vibrant “open source” ecosystem, which was kick-started by the leak of Meta’s Llama model in 2023. This allowed researchers who might not have the resources to train such large models from scratch to experiment with and improve upon a state-of-the-art large language model. Soon after, an internal Google memo titled “We Have No Moat, and Neither Does OpenAI” argued that the large and well-resourced leaders who pioneered the field using proprietary approaches would lose out to the nimbler open-source developers.
Yet the authors cast doubt on the idea that the mere existence of an open-source developer ecosystem will curtail the market power of leading technology firms.
A stranglehold over complementary assets
Openness is not enough because viable commercial offerings in the AI domain require access to more than knowledge about model architecture, training methodology, or even model weights. Capturing value from AI innovation also hinges on access to specialized infrastructure and capabilities.
Azoulay and his co-authors mention six of these complementary assets while acknowledging that their list is not necessarily exhaustive:
- The training compute environment
- Model inference capabilities and hardware
- Access to massive quantities of nonpublic training data
- Benchmarks and metrics to assess model performance
- Safety and governance capabilities
- Data network effects (whereby users’ engagement with a model generates information that dynamically improves its performance)
Even if access to the technology becomes more open over time, leading tech firms are likely to maintain quasi-exclusive control over these assets, owing both to the massive investments they require and the early lead they have established. New entrants in this domain will likely need to rent access to both infrastructure and capabilities if their value proposition is to appeal to potential customers.
The infrastructure costs of model training constitute a prime example of tightly held complementary assets. To give an idea of the scale of the investments required: Meta CEO Mark Zuckerberg announced in January 2024 that, in its effort to train the next generation of its large language model, Llama 3, the company expected to have built a massive compute infrastructure that includes 350,000 of Nvidia’s H100 graphics processing units by the end of 2024. Valued at the retail price of this crucial piece of equipment, the GPU investment alone would amount to approximately $10 billion.
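The $10 billion figure is a straightforward back-of-the-envelope calculation. A minimal sketch, assuming a hypothetical per-unit retail price (H100 prices in 2024 were commonly reported in the $25,000–$30,000 range, and the exact figure Meta paid is not public):

```python
# Back-of-the-envelope check of the GPU investment figure above.
# The unit price is an assumption, not a reported number.
num_gpus = 350_000
assumed_price_usd = 28_500  # hypothetical retail price per H100

total_usd = num_gpus * assumed_price_usd
print(f"about ${total_usd / 1e9:.1f} billion")  # prints "about $10.0 billion"
```

Any price in that reported range puts the total within rounding distance of the $10 billion the article cites, and it excludes networking, power, and data center construction, so it understates the true cost of entry.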
Other complementary assets could prove just as essential to attracting paying customers, beginning with AI safety and governance expertise, Azoulay said. “If you’re only experimenting in the lab, safety isn’t a big concern. Once a model is on the market and people can do good or evil with it, then it becomes a concern,” he said. “If you need access to these complementary capabilities, the only games in town are the big players.”
Will the generative AI industry take a platform turn?
Because complementary assets are so scale-intensive, the authors suggest that the generative AI sector might soon evolve into a platform structure, where a handful of firms control the foundation model layer but applications can be developed by a wider diversity of actors, including startup and academic teams.
The smartphone industry offers a template for imagining this future generative AI marketplace, Azoulay and his co-authors write. Only two technology firms control mobile computing operating systems — Apple, with iOS, and Google, with Android. Seemingly aware that they do not have a monopoly on imaginative ways to make mobile devices useful, these two behemoths have nurtured and curated an ecosystem of application developers.
A similar dynamic could soon unfold with generative AI tools, the authors write. Third-party developers could customize Bing or ChatGPT for a variety of consumer and enterprise use cases, without the need to build a model from scratch or replicate computing infrastructure. Researchers, startup founders, and other independent developers “can bring their specific knowledge to bear. They don’t have to wait for Google or Microsoft,” Azoulay said. “That would allow a flourishing of different applications and increased use of those models in different contexts.”
The downside of generative AI following a similar path to mobile phones is that development happens on terms set by the firms that control the foundational models. These firms make decisions such as which applications are allowed and what share of revenue each third-party developer can claim, Azoulay said, noting, “That’s a lot of power in the hands of a very small number of companies.”
Averting an oligopolistic future for the generative AI industry
Though generally pessimistic regarding the prospects for a competitive marketplace, the authors write that one major generative AI player is “going rogue” by leaning into genuine openness. After Meta’s large language model Llama was leaked online in March 2023, the company opted to double down on a more open approach than its competitors. In a recent open letter, Zuckerberg described this approach as “the path forward” and derided Apple for “the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping.”
Openness seems to be a sensible step for Meta. As the authors write, “Once a model’s inner workings are exposed for all developers to see and tinker with, incumbent firms may not be able to wind the openness clock back.” It also gives Meta a leg up with academic and research communities, which is important given that private industry now dominates AI research.
Yet it’s asking a lot for the industry to place its hope for open AI on the shoulders of one leader who could change his mind, just as Google has by tightening access to the Android platform. “Meta could potentially alter the terms of access, especially if the economic or strategic benefits of the open approach turn out to clash with its broader corporate goals,” the researchers write.
The role of competition policy
The authors stop short of recommending robust government interventions into the fledgling generative AI marketplace. Rather, they advise that policymakers treat generative AI like any other nascent industry, suggesting that there’s “nothing inherently special” about the sector, no matter how transformative the underlying technology is. They are skeptical of antitrust actions that could lock the industry into particular technological paradigms when there is still so much uncertainty about the frictions that will hamper the broad adoption of generative AI tools across the economy.
“What the government can do is lightly put its finger on the scale to create conditions for exploration and experimentation,” Azoulay said. “Watchful waiting is the attitude policymakers should probably espouse in light of all the uncertainty.”