AI as a Commodity

Large Language Models have rapidly become commoditized, with little to no differentiation for the average user. There is little distinction between them except in certain use cases (e.g. Claude Code, at the time of this writing, being the best for software development) and very specialized domains (e.g. frontier math, PhD-level science problems, etc.). Over time I expect even these gaps between frontier model providers to close, and for 99.9% of use cases it won't matter which model you use.

With little to no switching cost for the consumer (easy substitution) and minimal switching cost for enterprise clients, everything in AI land is rapidly coming down to two things:

1) Token Cost

2) Distribution

On the Token Cost side of things, Alphabet seems to have a strong advantage with their in-house TPUs and vertical integration. They don't have to pay the "Nvidia tax" that everyone else renting or purchasing GPUs has to pay. Over the mid-term I suspect this will make the difference, with current standalone AI companies such as OpenAI, Anthropic, etc. having distinctly worse unit economics due to Nvidia's famously high prices.

(Post written the old-fashioned way, via human. No AI used in this writing.)
