OpenAI is one of the largest names in artificial intelligence, if not the largest name. Backed by Microsoft and a literal ton of other people’s money — Japan’s SoftBank has been instrumental in sourcing around $40 billion of investment in 2025 alone — it’s also got a lot to lose. Perhaps that’s why company head Sam Altman appears willing to say anything, and nothing, to keep the hype going.
Most recently, Altman spoke about the development of AI and his expectations for 2026 (Reuters via Yahoo Finance). He believes that next year AI users “can not only use the system to sort of automate some business processes or fill these new products and services, but you can really say, ‘I have this hugely important problem in my business. I will throw tons of compute at it if you can solve it.’ And the models will be able to go figure out things that teams of people on their own can’t do.”
Steaming pile
He continued, “I would bet next year that in some limited cases, at least in some small ways, we start to see agents that can help us discover new knowledge or can figure out solutions to business problems that are kind of very nontrivial. Right now, it’s very much in the category of, okay, if you got something like repetitive cognitive work, we can automate it at a kind of a low level on a short time horizon.”
The claims then become loftier, though they sit further out than 2026's "limited cases, at least in some small ways" developments.
“So what an enterprise will be able to do, we talked about this a little bit, but just like give it your hardest problem if you’re a chip design company, say ‘Go design me a better chip than I could have possibly had before.’ If you’re a biotech company trying to cure some disease, so just go work on this for me. Like that’s not so far away.”
Add some gravy
There’s no faulting Sam Altman’s ambition. Or salesmanship. The man has convinced hordes of investors to part with billions of dollars over the lifetime of OpenAI. He expects to convince many more.
He was fired by OpenAI's board in late 2023 and reinstated within days; Microsoft, which had deepened its multi-billion-dollar stake in the company earlier that year, even offered him a landing spot in the interim. The prime reason for his return, at least as far as we know, is that most of OpenAI's staff threatened to follow Altman out the door. That's loyalty you can't buy. Perhaps you can indoctrinate it.
You'll notice from Altman's quotes above — and almost anything that he says in public — that there are no real details. Everything is aspirational. It will do this. That capability is coming. Here is a vague timeline for when we expect it to appear. And it mostly sounds good. A tool that can efficiently design computer chips, discover new medications, or automate your entire life without intervention (science fiction made that last one up) would be good, surely.
But Sam Altman's pronouncements on OpenAI's capabilities always resemble the patter of the average street magician, or some guy inventing reasons why his landlord isn't getting any rent this month. They're the shiny object in one hand, marvelled at while, apparently, something is happening elsewhere. Yet few people seem to be asking: "What is happening elsewhere?"
Is this not appetising?
There’s no reason given for why an OpenAI model cannot yet do what Altman says it will be able to do. The current obstacles are never outlined. Potential solutions are never floated. There’s no sign of any development or progress toward a more capable artificial intelligence. They might as well not exist. Weirdly, nobody ever asks for these explanations. Well, hardly anybody.
Altman's most recent (and his older) comments have absolutely no substance to them. 'This will come… later' is the sum of it. There is no indication of how, exactly when, or what it will take to pull it off. OpenAI might as well be trying to forcibly will new developments into existence. If Altman worked at a small app developer, he'd be the salesman who promises a massive, technically complex bespoke project to a client "next week Friday" without consulting his coding team.
“Oh, and you can also get the specs to us on Monday. That’ll give you a few days to figure out what you want.”
Could Sam Altman be acting vague so nobody knows what OpenAI is up to? Perhaps. Corporate espionage is a thing. Are we lacking details because nobody else would understand him? This is also known as the Emo Gambit, and it's not likely. Could it be that Sam and his various associates across the industry just haven't figured it out? Given how hard it is to keep people quiet, that seems the most logical explanation. Wouldn't want to spook the investors, after all.
Is it really?
Most signs point towards artificial intelligence models being… well, doomed. Yes, Google just launched Veo 3, which does remarkable things in the eight-second-video niche. The results are certainly impressive. They probably won’t stay that way. AI models, like OpenAI’s GPT, Meta’s Llama, and Anthropic’s Claude, decay at a phenomenal rate.
The problem, as writer and AI sceptic Ed Zitron points out (repeatedly, because nobody seems to be listening), is that there isn't enough data to build a long-term AI model that won't almost immediately disappear up its own rectum. Initial development seems promising, but eventually the data the model generates gets sucked back into its training set, and the model, being stupid, treats its own previous responses as fresh source material. It immediately starts to rot, and the longer it keeps functioning, the worse its responses get.
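This recursive-training rot has a simple statistical core, and you can watch it happen in miniature. Below is a deliberately toy sketch (a fitted Gaussian standing in for a language model; the sample size and generation count are arbitrary illustrative choices, not anything measured from a real system) in which each "generation" trains only on the previous generation's outputs — and the spread of the data withers away:

```python
import random
import statistics

random.seed(0)

# Generation 0: "real" data drawn from a known distribution.
data = [random.gauss(0, 1) for _ in range(10)]

stds = []
for generation in range(300):
    # "Train" a trivially simple model: fit a mean and a spread.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    stds.append(sigma)
    # The next generation trains only on the previous model's own outputs.
    data = [random.gauss(mu, sigma) for _ in range(10)]

print(f"spread at generation 0:   {stds[0]:.4f}")
print(f"spread at generation 299: {stds[-1]:.6f}")
```

Each refit loses a little information about the original distribution, and the losses compound: the estimated spread collapses towards nothing, and the "model" ends up confidently repeating a shrivelled version of itself. Real language models are vastly more complicated, but the feedback loop being criticised is the same shape.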
Perhaps I’m a pessimist, but I’d like to think that I’m a pessimist based on available data, and nothing supports the claims of ‘more power equals more intelligent AI.’ Eventually, even systems run on OpenAI’s ambitious Project Stargate infrastructure (which doesn’t exist yet and will power artificial intelligence capabilities that also don’t exist yet) will hit the same wall as the solutions we have now.
Previous turds
Over the course of the last five or six years, the technology world has seen several New Big Things™ that were supposed to change the world. Cryptocurrencies were in vogue for a time, and a few still linger. The big ones sort of make sense; the little ones hang around picking off investors and speculators who never got the memo.
NFTs were going to change the art world until people figured out they were paying large sums of money for links to a server. The blockchain and Web3 were supposed to change the world somehow, even if nobody could ever explain why. Or how. Remember the metaverse?
All of these imploded, either silently or spectacularly, and they've all mostly gone away. So why is what OpenAI, Meta, Amazon, Anthropic, and Nvidia are doing any different?
There is one difference. The scale of investment. Okay, two differences. The types of investors. Microsoft, Amazon, Apple, Samsung — these are folks who can take the risk without being completely doomed. They are expected to make a profit, however. That means selling this multi-billion-dollar experiment to anyone who will stand still long enough to make eye contact, like an even nerdier version of Pokémon. It doesn’t matter if it works correctly; the balance sheets are due at the end of the quarter.
The only folks making money from AI are those who supply the infrastructure. Everyone else, Sam Altman and OpenAI included, loses billions annually. And for what? Advanced chatbots, mostly, because nobody has figured out how to get AI to do anything useful yet.
The next big push? Agentic AI. Microsoft has a local presentation about the tech scheduled for tomorrow, and I'd love for them to make me eat these words, but agentic AI seems to be a more advanced version of the humble chatbot. But powered by AI, so it's all shiny and exciting. I hope to be proven wrong. But I can't help adding "powered by the blockchain" to the end of every claim of what AI can currently do and giggling a little. Because it all really seems that silly. About as possible, too.