Code Red: No Moat for LLMs

December 9, 2025

OpenAI seems to be feeling the pressures of competition. A week ago, the company reportedly (and dramatically) declared a “Code Red”.1 OpenAI’s flagship product, ChatGPT, is losing ground to competitors like Google’s Gemini. Given the competition, what does this say about the economic moats within the ecosystem? Do large language models (LLMs) have moats?

In a quickly evolving industry, it can be difficult to determine which areas will ultimately prove to have the most significant competitive advantages and barriers to entry. Economic moats are easy to identify in hindsight, but we don’t have that luxury with artificial intelligence. Instead, I’ve thought about potential sources of moats for LLMs and whether the products have the characteristics necessary to benefit from those dynamics. This does not need to be entirely conjecture, either: observing the current state of the marketplace shows that the industry is already highly competitive.

Length of Product Cycles

To start, let’s look at what the length of a product cycle can tell us about a company’s moat. In short, a product cycle is the period over which a new product is designed, developed, and released. It’s a proxy for how quickly a company must innovate to stay relevant within an industry.

Industries with short product cycles, like consumer electronics and fashion, typically have many competitors, and success is often short-lived. As competitors develop new features, or consumer preferences change, companies in these industries must react quickly. Samsung, for instance, releases new phones multiple times per year, while specific product lines typically receive an updated iteration annually.

On the other hand, some industries have long product cycles, where significant changes may take several years, if not decades, to show up in new products. This can reflect limited competition and high barriers to entry, since incumbents do not need to constantly innovate to maintain their market position. (In fact, the ultra-long product cycles of the widebody jet manufacturers, Boeing and Airbus, show that you can barely deliver on your current products and still not attract new entrants.) FICO releases an improved credit-scoring model about every five years, and lenders still use older models released over a decade ago.

And what of LLMs? Well, they seem to have ultra-short product cycles:

LLMs have been released at a high frequency. Including the release of GPT-1 in 2018, OpenAI has put out around a dozen models in the last seven years, depending on how you decide to count unique releases. Competitors have been busy as well: in the last couple of years, there have been four Groks, four Llamas, and six DeepSeeks.2 New features and leaps in model performance are quickly replicated by competitors. I would argue that there is no real moat in the OpenAI feature set. However, this is not enough to say that OpenAI has no moat, since many successful businesses have been built on short product cycles. Legacy software businesses, for example, have been very successful despite needing frequent updates and version releases to stay relevant.

At Least Software Has Switching Costs

Still, software products have been able to develop niche moats built upon their switching costs. It does not matter if a competitor releases new features if it would take your customers longer to switch than it would for you to replicate the idea. Creating customer “stickiness” is how software can be such a good business.

Off the bat, legacy software typically has the advantage of requiring data that is specifically structured for the product. To use QuickBooks, as an example, a user must manually upload transactions and configure them on the Intuit platform. This configuration process effectively captures the customer, since the data on the QuickBooks platform is not easily portable to another application without a significant investment of time and effort. Microsoft and Apple use their own file types for high-level applications on their operating systems, which once made it difficult for workflows to switch between the two. This led to process standardization between firms and across industries (like Windows for finance and accounting, and macOS for design), creating a secondary layer of network effects feeding into switching costs.

And yet, the key functionality of large language models resides in their ability to ingest unstructured, generalized data and return broadly applicable responses; the ease with which consumers are able to work with the models has fueled their rapid adoption. Unlike a typical software product, a user does not need to learn a new interface or how to work with specialized tools. The models are built to consume unstructured data in the form of simple images or text, in stark contrast to the structured data ingested by legacy software applications. The outputs themselves are also generalized to fit the requisite context: Gemini, Claude, and ChatGPT are all competing to deliver the best response in a format with a wide scope of applicability.

Aside from temporary differences in performance, there are no forces compelling a user to choose one model over another for a given task. I can just as easily feed a prompt into Gemini as I can into ChatGPT. There are no significant inherent switching costs between LLMs. Even developers who incorporate third-party models into their applications can easily switch between models using aggregator APIs, as the sketch below illustrates.
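To make that point concrete, here is a minimal sketch of what “switching” looks like for a developer, assuming an OpenAI-compatible aggregator endpoint (OpenRouter is used purely for illustration, and the model identifiers are examples rather than a definitive list). The prompt and the surrounding code stay the same; only the model string changes.

    # Illustrative sketch: one client, several interchangeable models.
    # Assumes OpenRouter's OpenAI-compatible endpoint; model IDs are examples.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    def ask(model: str, prompt: str) -> str:
        # The same call works regardless of which vendor's model is named.
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    prompt = "Summarize the competitive dynamics of the LLM market in one sentence."
    for model in ("openai/gpt-4o", "google/gemini-2.0-flash-001", "anthropic/claude-3.5-sonnet"):
        print(model, "->", ask(model, prompt))

The “switching cost” here amounts to editing a string, which is exactly the dynamic that undermines any model-level moat.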

ChatGPT Has Brand Value and Habitual Users. Didn’t Altavista?

One line of defense for OpenAI is the value of their brand. There’s no question that ChatGPT has become synonymous with LLMs, and their first-mover advantage has led to persistent mind share amongst consumers despite generally undifferentiated performance. Sure, some models are better at certain tasks, but the relative standings are constantly in flux. Users likely keep returning to ChatGPT out of habit, not because they consciously expect the model to have the best results. This is undoubtedly a source of stickiness. I would argue that it is the only reason ChatGPT continues to have the largest share of the market.

But the combination of brand value and consumer habit alone is temporary; just look at the search engine market in its early days. From c. 1996 to 2000, a combination of Yahoo and Altavista controlled the most market share.3 Consumers clearly had the habit of using Yahoo as their landing page and Altavista as their predominant search engine. Yet it took Google only a few years from its 1998 launch to take the pole position.

Is There a User-Data Flywheel?

One reason Google was able to compound their share gains was their utilization of user data. They could take the searches submitted by their users and the related query results to determine the efficacy of their search engine, which allowed them to improve their product and attract more users. LLM developers already know this trick, and all of them have sought to take advantage of it.

OpenAI clearly has the most users, so does it clearly have the best data? I am truly not sure. You have to imagine that there are diminishing returns to collecting data from each incremental user. Where is the point of critical mass, beyond which additional user data makes no practical difference? 800 million users? 100 million? A thousand? According to Alphabet, Gemini has surpassed 650 million monthly active users, which compares somewhat favorably to the market leader’s 800 million weekly active users.4

Even so, that’s not all of the data available to developers. Alphabet is connected to billions of users throughout their portfolio of other products. Surely this data is also valuable to the company’s engineers developing Gemini.

Is there a data advantage that accrues to the models with the largest user bases? I think so. Is it unique to OpenAI? I doubt it.

Where Do We Go From Here?

If you buy my argument that these models are bona fide commodities, then economics would suggest that the lowest-cost producer wins. Will that be OpenAI, which relies on Microsoft for compute resources and Nvidia for GPUs? Or will it be Alphabet, which designs its own TPUs and owns its own data centers? The structural advantages clearly reside with the latter.

Going forward, you also have to wonder who will prove more adept at integrating their consumer product within a broader ecosystem. Aside from price, commodities also compete on distribution, and LLMs will be no different. The ability of any given model to reach more customers and provide more value will be determined by proprietary distribution channels: applications with their own inherent moats that exist outside of any AI functionality.

OpenAI currently has a limited consumer ecosystem beyond ChatGPT and related wrapper applications. Alphabet, on the other hand, already offers a productivity suite, a video social network, identity authentication, and a developer cloud, among other services. They, along with Microsoft, seem best positioned to create a cohesive AI experience for consumers by leveraging their already captured markets, with their models existing as embedded features running inside the applications their consumers use every day.

Can businesses in this challenging industry build a moat? Sure, but the moats will not be in the models. The next frontier is cheaply integrating these new products into use cases that exist outside of the indistinguishable prompt windows.

Endnotes:

  1. WSJ.com, “OpenAI Declares ‘Code Red’ as Google Threatens AI Lead,” Dec. 2, 2025
  2. Based on headline releases from OpenAI, Meta, Google, xAI, and DeepSeek. A full list can be found here.
  3. There is not a definitive source of data pointing to specific search engine market share during this period. However, a consensus of academic papers acknowledges the dominant positions of Altavista and Yahoo. See: Karras, I., & Stavroulakis, I. (2015). Standards Wars: Google vs. Altavista (Yahoo), and Seymour, T., Frantsvog, D., & Kumar, S. (2011). History of search engines. International Journal of Management & Information Systems (IJMIS), 15(4), 47–58.
  4. According to press releases from OpenAI and Alphabet.

Important Disclosures

This page is provided for informational purposes only. The information contained in this page is not, and should not be construed as, legal, accounting, investment, or tax advice. References to stocks, securities, or investments in this page should not be considered investment recommendations or financial advice of any sort. Appalaches Capital, LLC (the “Firm”) is a Registered Investment Adviser; however, this does not imply any level of skill or training and no inference of such should be made. The Firm and its clients may hold positions in securities mentioned. All investments are subject to risk, including the risk of permanent loss. The strategies offered by Appalaches Capital, LLC are not intended to be a complete investment program and are not intended for short-term investment. Any opinions of the author expressed are as of the date provided and are additionally subject to change without notice.