Ask six AI researchers when we’ll achieve human-level artificial intelligence and you’ll get six different answers. That’s exactly what happened at a November 2025 roundtable for the Queen Elizabeth Prize for Engineering, where Jensen Huang, Yoshua Bengio, Geoffrey Hinton, Fei-Fei Li, Yann LeCun, and Bill Dally all gave completely different timelines for the same question.
This disagreement isn’t a bug. It’s a feature. It tells us something important about the current moment: we’re in a period where the people who know most about AI genuinely don’t know what happens next.
The Reliable Trend: Things Keep Getting Better
Here’s what almost everyone agrees on. Models will continue to improve. The same trajectory that took us from GPT-3 to GPT-4, from Claude 2 to Claude 3.5, will keep extending forward because the fundamental drivers of progress remain intact.
Dario Amodei, CEO of Anthropic, described the progression in concrete terms on the Lex Fridman podcast: “We’re starting to get to PhD level, and last year we were at undergraduate level, and the year before we were at the level of a high school student.”
Whether that progression continues at the same rate is uncertain, but the direction seems clear enough that betting against it would be unwise in the short term.
What does this mean practically? More capable coding assistants. Better writing tools. Smarter research helpers. The things AI already does reasonably well will get noticeably better over the next few years, likely enough that the tools from early 2025 will feel primitive by 2027.
Simon Willison, a software developer who has been extensively documenting his experience of building with LLMs, wrote about how his workflow changed in 2025: “Coding agents changed everything for me.” He described building dozens of tools by prompting LLMs, working asynchronously with AI agents, and even developing code from his phone with enough confidence to land changes in production projects.
That’s not science fiction. That’s a working developer describing his current practice. Multiply this across millions of knowledge workers and you have a sense of the shift already underway.
The Multimodal Horizon
Text, images, audio, video. The walls between these modalities are crumbling faster than many expected.
Epoch AI’s analysis of what AI will look like in 2030 projects significant advances: implementing complex scientific software from natural language descriptions, assisting mathematicians in formalizing proofs, answering open-ended questions about laboratory protocols. These aren’t wild speculation. They’re extrapolations from current benchmark progress.
But extrapolation is tricky. Every capability chart eventually hits constraints. Training data runs out. Compute costs scale exponentially. Diminishing returns set in. The question isn’t whether these barriers exist but when they’ll become binding.
Agents: The Next Frontier (Probably)
The big bet right now is on AI agents. Not just AI that answers questions, but AI that takes actions, uses tools, works across systems, and completes multi-step tasks with minimal human oversight.
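To make “agent” concrete, here is a minimal sketch of the loop most agent systems share: the model proposes either a tool call or a final answer, a harness executes the tool, and the result is appended to the conversation until the task finishes or a step limit is hit. Everything below (call_llm, search_docs, run_tests) is a hypothetical placeholder for illustration, not any particular product’s API.

```python
# Minimal sketch of an agent loop: the model proposes an action, the harness
# executes it, and the result goes back into the context. call_llm() and both
# tools are hypothetical stand-ins, not a real vendor API.

def search_docs(query: str) -> str:
    """Hypothetical tool: look something up."""
    return f"(results for {query!r})"

def run_tests(path: str) -> str:
    """Hypothetical tool: run a test suite."""
    return f"(all tests passed in {path})"

TOOLS = {"search_docs": search_docs, "run_tests": run_tests}

def call_llm(history: list[dict]) -> dict:
    """Stand-in for a real model call. A real agent would send `history`
    to an LLM and parse its reply; this stub just scripts two turns."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "run_tests", "arg": "src/"}   # first turn: take an action
    return {"answer": "Tests pass; task complete."}   # second turn: finish

def run_agent(task: str, max_steps: int = 10) -> str:
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):                 # bound the loop: agents can wander
        step = call_llm(history)
        if "answer" in step:                   # the model decided it is done
            return step["answer"]
        result = TOOLS[step["tool"]](step["arg"])   # execute the requested tool
        history.append({"role": "tool", "content": result})
    return "Stopped: step limit reached."

print(run_agent("Fix the failing unit test"))
```

The step limit and the tiny tool registry are exactly where the brittleness discussed next shows up: real tasks rarely fit a two-tool loop this neatly.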
This is where the hype gets thick. The reality: agents remain brittle. They fail in surprising ways. They require careful orchestration to remain useful. A 2024 Hacker News discussion captured the skepticism well, with user talldayo predicting that “In 10 years, AI and LLMs will be a joke on The Simpsons in the same way they made fun of the Palm Pilot.”
That’s probably too pessimistic. But it’s a useful corrective to the breathless agent announcements from every major lab. The gap between demo and deployment remains wide. Agents that work flawlessly in controlled conditions often struggle with the messiness of real workflows.
Still, even incremental improvement in agent reliability unlocks significant value. If agents can reliably handle 70% of routine tasks instead of 30%, that’s transformative for many workflows. Perfection isn’t the bar. Usefulness is.
The AGI Question: Where Experts Actually Disagree
When does AI become generally intelligent, able to perform any intellectual task a human can? This is where the expert disagreement gets sharpest.
At one end: Dario Amodei suggests we’re running out of “truly convincing blockers, truly compelling reasons why this will not happen in the next few years.” His extrapolation points to 2026 or 2027 as plausible for AI systems matching human-level capability across many domains.
At the other end: Yann LeCun, Meta’s Chief AI Scientist, calls current AGI predictions “complete delusion.” He argues we need machines “as smart as a cat” before worrying about superintelligence, and we’re nowhere close. He sees current language models as demonstrating that “you can manipulate language and not be smart.”
LeCun is a Turing Award winner; Amodei runs one of the frontier labs. Both work with frontier AI systems daily. Both have access to the same research. They just interpret it completely differently.
One commenter on Hacker News, rwaksmunski, made the sardonic observation that “AGI is still a decade away, and always will be.” The thread that followed actually disputed this framing, noting that researchers are only now adjusting expectations toward nearer timeframes based on observed progress. The “always ten years away” joke may be becoming outdated precisely because timelines are compressing.
Jobs: The Honest Answer Is We Don’t Know
Will AI take your job? The predictions range from catastrophic to benign, often depending on who’s making them.
Goldman Sachs: “We remain skeptical that AI will lead to large employment reductions over the next decade.”
Dario Amodei: AI could “wipe out half of all entry-level white-collar jobs” and spike unemployment to 10-20% within the next one to five years.
The World Economic Forum: 92 million jobs displaced, 170 million new jobs created, for a net positive of 78 million positions by 2030.
MIT economist Daron Acemoglu: AI will only be ready to take over or heavily aid “around 5% of jobs over the next decade.”
These aren’t predictions you can average. They represent fundamentally different models of how technology affects employment. Either displacement happens gradually with time for adaptation, or it happens suddenly with massive disruption. We don’t know which model applies to AI.
Federal Reserve Chair Jerome Powell, when asked about AI and employment, said something unusually candid: “This may be different.” That’s the head of the Federal Reserve acknowledging we’re in genuinely uncharted territory.
The honest answer is that we don’t know, and anyone claiming certainty about employment impacts is overconfident. What we do know: preparation beats prediction. Organizations and individuals who understand AI capabilities have more options regardless of which scenario unfolds.
Regulation: The Wild Card
The EU AI Act is now in effect. China has implemented AI regulations. The US remains relatively hands-off. This patchwork will shape AI development in ways we can’t fully predict.
Regulation could slow dangerous capabilities. It could also concentrate AI development in jurisdictions with lighter rules. It could protect workers. It could entrench incumbents. The effects will be second-order and third-order in ways that no one can map confidently today.
What seems clear: the regulatory environment in 2030 will look nothing like today. The question is whether it will enable beneficial AI development while limiting harm, or whether it will do neither effectively.
How to Think About AI Predictions
A useful framework: separate the near term from the medium and far term, because certainty drops off sharply the further out you look.
Near-term (2025-2027): Current systems will get better. Coding assistants, writing tools, research helpers, image generators. These improvements are mostly predictable from current trajectories. Plan for them.
Medium-term (2027-2030): Agents may become reliable enough for significant autonomous work. Multimodal AI will likely handle many tasks that are bottlenecks today. Economic impacts will start becoming visible in employment data. Much less certainty here.
Far-term (2030+): AGI possibilities, superintelligence scenarios, transformative economic restructuring. Genuine uncertainty. Anyone claiming to know what happens here is guessing.
One Hacker News commenter, massung, offered a prediction that stuck with me: “My personal prediction is that the next massive leap in AI is going to be a paradigm shift away from how we train and simulate networks.”
That’s the kind of wild card that’s hard to price in. Current progress comes from scaling known approaches. The next breakthrough might come from something entirely different. Or it might never come. We don’t know.
What’s Actually Worth Preparing For
Given all this uncertainty, what’s worth doing now?
Learn the tools. Whatever happens with AGI timelines, current AI capabilities are already useful. Understanding how to use them effectively has immediate payoff and positions you for whatever comes next.
Build adaptability. The specific predictions will be wrong. The general direction of increasing capability seems reliable. Organizations and individuals who can adapt to changing capabilities will navigate uncertainty better than those betting on specific outcomes.
Stay calibrated. The hype cycle is exhausting. The doom cycle is paralyzing. Neither serves you well. Pay attention to what AI actually does in practice, not what demos promise or critics fear.
Watch the practitioners. The most reliable signal comes from people building real things with AI. When developers report that coding agents “changed everything,” that’s more informative than any prediction about when we’ll achieve AGI.
One Hacker News user, mrdependable, captured a common frustration in a 2025 thread: “I always see these reports about how much better AI is than humans now, but I can’t even get it to help me with pretty mundane problem solving.”
That gap between benchmark performance and practical usefulness is real and worth remembering. Progress is happening. It’s just uneven, messy, and often overhyped.
The Prediction That Matters Most
Here’s my prediction: five years from now, we’ll look back at this article and some predictions will have been too conservative, others too aggressive, and at least one thing will have happened that nobody anticipated.
That’s not a copout. It’s the most honest thing I can say about a field where the people who know most disagree most sharply.
The uncertainty is uncomfortable but also clarifying. It means the future isn’t determined yet. The choices we make about how to develop and deploy AI still matter. The debates about safety, about employment, about regulation, about who benefits from these systems… those debates are worth having precisely because the outcome isn’t already written.
What’s certain: AI will continue to change how we work, create, and solve problems. The exact nature of that change remains genuinely open. That’s either terrifying or exciting, depending on your disposition.
I find it exciting. But I understand both reactions.
This article reflects AI capabilities and predictions as of January 2026. Given the pace of change in this field, some information may already be outdated by the time you read this. For current AI tools and practical applications, explore our guides on using AI effectively in your work.