How likely is AGI in the next five years? A look at money vs. science
I came across Deutsche Bank’s latest report on AI, and it sparked an interesting thought experiment: how likely is it that we’ll see AGI (AI that thinks and learns like a human) within the next five years?
The report highlights a fascinating divergence: the view from money vs. the view from science.
Money: the probability inferred from trillions poured into data centers, Nvidia chips, and servers. Investors seem to be betting that AGI is inevitable.
Science: the probability inferred from research papers and AI development models. Experts are far more cautious, suggesting the realistic probability is only 20%.

What I take from this:
AI is real and transformative, but the hype around AGI might be ahead of reality. Expect volatility, overpromised outcomes, and valuable lessons along the way. Meanwhile, AI continues reshaping work, attention, and human behavior.
I’d love to hear the community’s perspective:
Are you seeing this “money vs. science” gap reflected in your own AI projects or investments?
How do you approach AI opportunities with both caution and ambition?

Replies
Hi Alina,
I think we're a way off yet. I saw Daniela Amodei's comments, and she's right but also wrong. Our initial thoughts on AGI were obviously way off in terms of what it actually meant. Yes, for specific use cases, AI will now outperform humans. But until we can put those use cases together and add inference, judgement, and long-term state/context awareness (as well as a ton of other stuff), we're still far off.
I'm excited about the real-world use cases of AI even without AGI, though. Yes, there is probably a financial bubble, and we're still waiting for the boring-but-killer workflow. But still...
@zolani_matebese Well put, agree: in many areas, especially creative and strategic work, AI augments rather than replaces humans. IMO the real value is in building AI + human workflows where AI compounds productivity and helps humans focus on experiments and goal achievement to drive outcomes.
I think it really depends on the context. As other commenters mentioned below, there are already tasks where AI tools outperform humans. However, I do think the extreme hype and investor focus on AI is over-inflating its potential use and capability. In fact, a risk of this mindset is that the money also controls the creative development, and over the next few years I think we run the risk of more and more cookie-cutter "AI products" that exist for their own sake instead of meaningfully transformative digital products (with or without AI). Personally, I take a cautious but optimistic view: each AI product should prove itself to its users before we jump on the buzz bandwagon!
@tiasabs AI tools will keep getting sharper at specific tasks because they’re built to optimize within narrow lanes. However, AGI is an entirely different beast. It’s not about outperforming humans at one specific thing, but about general reasoning, transfer learning, judgment, and adapting to new context without being retrained.
Today’s hype often blurs that line. We’re funding faster calculators and calling them thinkers. The real risk, as you said, is ending up with polished, look-alike AI products that solve no real problem.
I'm going to go super deep. Check my profile, but I hope you'll find me qualified to speak as someone on the "scientist" curve above. I was also at NeurIPS 2025, and there are some trends that are hitting a wall.
This gap is real. I'm seeing stratospheric pre-seed raises based on teams alone and many investors are about to lose a ton of money.
When I think of AGI, I think of something that can replace a human across almost all scenarios, without prior human conditioning, for an extended period of time. This is subtly different from OpenAI's definition, which is to replace most of the productive tasks that a human can do.
On OpenAI's definition, I don't think we're too far off. You can prompt ChatGPT to do most highly skilled jobs. If it doesn't involve physical labor and is plugged in to the right apps and tools, chances are it can do tasks that would take a human two hours or so to complete. But that's not generalizable.
Humans discover goals, constantly self correct, set subgoals, and explore very well in novel situations. We remember failures and avoid them over extended timescales (and vice versa). Models are not capable of this.
We call this continuous learning, and it's still a largely unsolved problem. GPT5.2 isn't going to get better at a task over time. OpenAI researchers need to collect, compile, and organize a ton of data, run it against the model in training, look at evals, etc., to make sure the model improves in the next training run. There's a lot of human input in making these models better, but real AGI shouldn't require human input to self-improve. You can automate all these components, but you quickly run into catastrophic forgetting, not to mention the massive safety problems that arise here.
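To make the catastrophic-forgetting point concrete, here is a minimal toy sketch (my own illustration, not anything from an actual lab pipeline): a tiny classifier is trained on one synthetic task, then naively fine-tuned on a second task with a conflicting decision boundary, and its accuracy on the first task collapses. It assumes PyTorch is available; the data, model, and numbers are made up purely for demonstration.

```python
# Toy illustration of catastrophic forgetting under naive sequential training.
# Everything here (tasks, model size, learning rate) is invented for the demo.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(shift):
    # Two linearly separable classes; `shift` rotates the decision boundary,
    # so task A and task B disagree on most points.
    x = torch.randn(2000, 2)
    y = (x[:, 0] + shift * x[:, 1] > 0).long()
    return x, y

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=300):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

xa, ya = make_task(shift=+3.0)   # task A
xb, yb = make_task(shift=-3.0)   # task B, boundary conflicts with A

train(xa, ya)
print("task A accuracy after training on A:", accuracy(xa, ya))
train(xb, yb)                    # naive continual update: no replay, no regularization
print("task A accuracy after training on B:", accuracy(xa, ya))  # typically drops sharply
```

Replay buffers, regularization, and similar tricks soften this in toy settings, but doing it at frontier-model scale without a human-curated retraining loop is exactly the open problem being described above.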
Rich Sutton's keynote at NeurIPS 2025 goes pretty deep into this, but essentially, for true AGI, we need to remove ourselves from this process entirely. The other thing is that humans explore, self-correct, and set excellent goals and subgoals, both for exploration and for task completion. You can't tell an AI today to just "build a thriving business" and have it work properly. We are years away from that capability.
The other thing is that there is some evidence that scaling reasoning training is starting to hit a wall. The paper "Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?" presents findings that were very surprising to me. Basically, RL-tuned models actually lose the ability to solve novel, unseen problems. What RL is doing feels more akin to advanced pattern matching than to improving true problem-solving capability. Essentially, the problem-solving capability already exists in the base model, and RL is teasing it out. That part is to be expected; what's unexpected is that there's an actual loss in generalization capability. This is very different from how a human learns: if you teach a kid new math concepts, the set of problems they can solve strictly grows and doesn't shrink. So model capability is running up against this wall.
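For context on how that kind of result is measured: pass@k is the probability that at least one of k sampled attempts solves a problem. The sketch below is my own illustration with made-up numbers, not data from the paper; it uses the standard unbiased pass@k estimator to show the shape of the reported effect, where the RL-tuned model wins at small k but the base model overtakes it at large k because it still occasionally samples solutions the RL-tuned model never produces.

```python
# Hedged sketch: standard unbiased pass@k estimator, applied to invented counts.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples (out of n total, of which c
    are correct) solves the problem: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical per-problem correct counts out of n = 256 samples per model.
problems = [
    {"base_c": 40, "rl_c": 180},  # easy problem: RL tuning boosts pass@1 a lot
    {"base_c": 2,  "rl_c": 0},    # hard problem: only the base model ever samples a solution
]
n = 256
for k in (1, 8, 64, 256):
    base = sum(pass_at_k(n, p["base_c"], k) for p in problems) / len(problems)
    rl   = sum(pass_at_k(n, p["rl_c"], k)   for p in problems) / len(problems)
    print(f"k={k:>3}  base pass@k={base:.2f}  RL pass@k={rl:.2f}")
```

With these toy numbers the RL-tuned model looks better at k=1 but the base model wins by k=64, which is the qualitative pattern the paper reports: RL concentrates probability on solutions the base model could already find rather than expanding what is solvable.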
So you need to build a resilient company. You want to be like Amazon during the dot-com boom: endless focus on product and good capital discipline. If you're raising, find a fantastic VC that can distinguish the potential from the hype, and become one of those companies yourself by focusing on a fantastic, unassailable product.
@dougli Wow, thank you for such a thoughtful and detailed contribution! This is one of the more grounded takes I’ve seen in AGI discussions.
The gap between “can execute many skilled tasks” and “can autonomously discover goals, adapt over time, and learn continuously without human intervention” is foundational. Confusing the two is where a lot of hype and wasted capital are coming from.
Possible, but not likely in the way people imagine.
Money and compute are accelerating insanely fast — that’s the part that is undeniable. We’ll almost certainly get models that feel more general, more autonomous, and more capable across domains in the next 5 years.
But the science side is the bottleneck. We still don’t really understand:
reasoning vs pattern matching
long-term learning without retraining
true world models and agency
So what we’re more likely to see is AGI-like systems in practice, not in definition:
systems that do many things well
agents that work with guardrails
tools that replace large chunks of knowledge work
People will argue “this is AGI” and others will say “no, it’s not,” and both will be kind of right.
My bet: in 5 years we won't have human-level AGI, but we will have systems that make the distinction feel academic.
@alpertayfurr AI will definitely improve over time. At least, I hope it'll stop producing confident nonsense when given a prompt outside its training data 😁 But will it be able to think autonomously and create something completely new the way humans do? It seems like right now the science doesn't have any clarity on that question.