In October 2022, I woke up one day stressed about writing my PhD thesis, and ChatGPT was all over the news. Artificial intelligence (AI) was finally intelligent. And in what form? A chatbot. The most hated way of communicating with computers. Chatbots are reminiscent of frustrating customer service experiences, where we keep spamming the bot with: “Let me speak to an agent.” (That was before “agent” started to refer to AI agents.)

Nonetheless, the AI hype kept increasing, and most people grew more and more worried about being replaced by ChatGPT. Two years later, most people are using AI in their jobs and are getting more productive as a result. These large language models (LLMs) turned out to hallucinate in unexpected ways, so workers with more attention to detail are more likely to catch their mistakes, while sloppier workers produce poorer outcomes as a result. The one conclusion everyone agrees on is: “My job is too complex to be replaced by AI, but I am pretty sure others will be replaced quickly!”

I was asked by a graduate student, “Aren’t you worried AI will take over our jobs soon?” And I laughed. There is no shortage of problems I would like to solve in science, and as much as LLMs can help with any of them, that would just mean fewer problems to worry about! I do not doubt LLMs’ ability to help with science, as much as they help with any other job, but doing the whole task end to end is beyond the realm of current models, let alone extrapolating to new scientific problems and reasoning their way there. Scientists rebut each other, and sometimes even themselves, just to converge on the worldview that matches reality. The deep ontological errors scientists make, whose validity cannot be instantly verified, are unlikely to be safely delegated to LLMs.

Furthermore, looking at the world around us, one can see how easily computers and the internet could reduce bureaucracy, from forms to fill to calls to make and signatures to write. Yet the world we live in has not gotten rid of any of these bureaucratic practices. These so-called “e-mail jobs” are created and kept for the mere reason of ensuring “robustness.” The world is created and run by humans who communicate with other humans. No one wants to deal with a stubborn AI agent that closed their bank account because of some weird preset behavior, or an AI agent that decides who gets a promotion based on some obscure algorithm. The human is the center of the economy and always will be.

But this article is not meant to be about the socioeconomics of AI. Rather, there is another observation I made that I think is worth sharing. The rise of AI has made people believe in computation. Before, simulations were thought to be an idealized version of the world that has nothing to do with reality: frequently inaccurate and, being unverifiable, mere noise to be ignored in favor of the signal, the real world. After the AI boom, people are more receptive to computation and simulation overall. If AI can process the world, so can a computer (shocker!). Suddenly people are interested in deep tech, and the next trend (since the master plan of replacing bureaucrats has failed) is to do AI + Science.

Again, much of AI + Science is not tailored toward replacing scientists, nor is it actually about discovering new science. It is merely the hope that we can “find” the new drug molecule or the new catalyst through generative AI. Some of it is simply accelerating scientific procedures and tools through the use of AI, which is a win-win-win, because who likes to memorize the syntax of every software stack most scientists use, or to search through piles of papers to find the effect of some protein-protein interaction, for example? More importantly, this use case is actually the most valid one, since LLMs are inherently interpolative rather than extrapolative, and “finding” a new solution that we have missed is certainly within the realm of possibility. (Oh yes, those “emergent” abilities: they turn out to be limited after all, and people getting excited about them only proves that interpolation is the only real expectation.)

At the end of this, I would like to make some predictions about AI in the next five years. The amount AI companies are spending per LLM call is unsustainable. This cost will push companies to serve much smaller distilled versions to the masses, while saving the most advanced reasoning models for paid users and big corporations, or even making them exclusive to governments. The smaller LLMs can be coupled with bigger LLMs that are only called on demand, which will eventually become a hierarchical structure. There will certainly be advances that let an LLM search for knowledge in its own database on demand, rather than having it all ready all the time. But the most important forecast is… the AI boom will go bust.
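To make the hierarchy concrete, here is a minimal sketch of what such a cascade could look like: a cheap small model answers first, and a large model is consulted only when the small one is unsure. The `small_model` and `large_model` functions and the confidence threshold are hypothetical placeholders of my own, not any real API.

```python
# Hypothetical two-tier cascade: answer with a cheap distilled model,
# escalate to an expensive large model only when confidence is low.

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for escalating to the big model

def small_model(prompt: str) -> tuple[str, float]:
    """Placeholder for a cheap distilled model: returns (answer, confidence)."""
    # A real system would derive confidence from, e.g., token log-probabilities.
    return f"small-model answer to: {prompt!r}", 0.5

def large_model(prompt: str) -> str:
    """Placeholder for an expensive large reasoning model."""
    return f"large-model answer to: {prompt!r}"

def answer(prompt: str) -> str:
    """Route a query through the hierarchy: try small first, escalate on demand."""
    reply, confidence = small_model(prompt)
    if confidence >= CONFIDENCE_THRESHOLD:
        return reply  # cheap path: most queries should stop here
    return large_model(prompt)  # expensive path, called only on demand

print(answer("What limits current LLMs?"))
```

The same routing idea extends naturally to more tiers, or to a retrieval step that fetches knowledge from a database on demand instead of escalating to a bigger model.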

The bust is inevitable given that LLMs currently operate at a loss, but mainly because they cannot live up to their promises. Certain CEOs of certain AI companies have been relentlessly upping their predictions of what AI will do in the future. Yet it is clear that we are quickly hitting the limits of what LLMs can do. The winners will be the companies with reasonable promises and enough capital to withstand the storm. Yet two things will surely remain. One, the public perception that deep tech and simulation matter for solving the world’s problems. Two, the huge data centers and infrastructure, which will remain for the rest of us to use, and which of course will be used for simulation.