
From AI Winter to AI Summer: The Cycles That Shaped Machine Intelligence

AI has nearly died twice: first in the 1970s, then in the late 1980s. Both times, the field promised more than it could deliver. Now we're in the longest AI summer ever. Will this one last?

Robert Soares

The word “winter” sounds gentle. Snow on branches. Quiet streets. Hot coffee. But in AI, winter means something brutal: labs closing, researchers fleeing to other fields, funding evaporating, and entire decades of progress stalling while the rest of computing races ahead.

AI has experienced two major winters. Both nearly killed the field. Both were caused by the same basic pattern: researchers promised too much, delivered too little, and funders got burned.

Now we’re in what might be the longest AI summer in history. Investment keeps breaking records. ChatGPT reached 100 million users faster than any application ever built. The question that haunts anyone paying attention: are we building something real this time, or setting up for the biggest winter yet?

The Pattern Nobody Wanted to See

Here’s something strange. Both times, people inside the field saw the winters coming. They warned everyone. Nobody listened.

In 1984, AI researchers Roger Schank and Marvin Minsky stood up at the annual meeting of the American Association for Artificial Intelligence and delivered a warning that would prove prophetic within three years. They had lived through the first AI winter in the 1970s, watching funding collapse after researchers failed to deliver on machine translation and other ambitious promises.

Now they saw it happening again. Expert systems were hot. Corporations were spending over a billion dollars annually on AI. Everyone wanted in.

Schank and Minsky told the business community: this enthusiasm is excessive, disappointment is inevitable, and you should prepare for another winter.

The business community did not prepare.

Three years later, the collapse began.

Billions Evaporated

The second AI winter, roughly 1987 to 1993, devastated the field in ways that seem almost impossible now given current enthusiasm levels and investment flows.

In 1986, personal computers topped out at around 44 MB of storage, which made building the large knowledge bases that expert systems required impractical. Meanwhile, development costs became impossible to justify when simpler approaches could handle most business needs at a fraction of the expense.

Specialized AI companies like Lisp Machines and Symbolics had created expensive hardware optimized for AI programming and charged premium prices for their proprietary systems. Then Apple and Sun Microsystems released general-purpose workstations that matched their performance at a fraction of the cost. The specialized hardware market collapsed around 1987. These weren’t small companies. They vanished.

Japan’s Fifth Generation Computer Systems initiative, which had invested roughly $500 million to create massively parallel computers for AI, was shut down after a decade of work. The U.S. Strategic Computing Initiative, which DARPA had funded with dreams of machines that could “see, hear, speak, and think like a human,” had its budget slashed. One account described the cuts as “deep and brutal.”

The Insurance Company That Won and Lost

On Hacker News, a user named blankfrank shared a perspective that captures the complicated reality of expert systems during that era, writing about their experience building AI at an insurance company from 1988 to 1996: “Our expert system ran on PCs in our 30 branches then migrated to a mainframe.”

The system “captured $4 million in additional revenue.” That sounds like success. But the developer later questioned the social cost when staffing was reduced after the project completed. The technology worked. It also eliminated jobs.

This was the pattern. Expert systems delivered narrow value while failing to transform entire industries as promised. The gap between expectation and reality created the conditions for winter.

What Expert Systems Got Wrong

The expert systems that drove the 1980s boom made perfect sense on paper. Capture human expertise in rules. Let computers apply those rules. Scale expertise infinitely.

Digital Equipment Corporation’s XCON system configured orders for VAX computers, reportedly saving the company $40 million over six years. Success stories like this attracted massive corporate investment, and by 1985 companies worldwide were spending over a billion dollars annually on AI hardware, software, and consulting.

But expert systems had fundamental problems that became clear only at scale. They couldn’t learn. Every new situation required a human expert to write new rules. And they suffered from the “qualification problem”: no hand-written rule set can anticipate every exception, so the systems produced grotesque mistakes when faced with anything unexpected.
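To make that brittleness concrete, here is a minimal, entirely hypothetical sketch of how a rule-based system of that era worked. The claim-routing rules and field names below are invented for illustration; the point is that anything the rule authors did not anticipate simply falls through.

```python
# Hypothetical sketch of a 1980s-style rule-based expert system.
# Each rule is a (condition, conclusion) pair written by a human expert.
# Nothing here is learned from data; coverage is only as good as the rules.

RULES = [
    (lambda claim: claim["type"] == "auto" and claim["amount"] < 5_000,
     "approve automatically"),
    (lambda claim: claim["type"] == "auto" and claim["amount"] >= 5_000,
     "route to adjuster"),
    (lambda claim: claim["type"] == "home" and claim["amount"] < 10_000,
     "approve automatically"),
]

def decide(claim):
    for condition, conclusion in RULES:
        if condition(claim):
            return conclusion
    # The qualification problem in miniature: any case the experts did not
    # anticipate falls through, and someone has to write yet another rule.
    return "no rule matched -- system cannot decide"

print(decide({"type": "auto", "amount": 1_200}))   # approve automatically
print(decide({"type": "flood", "amount": 3_000}))  # no rule matched
```

Every gap like that last one meant another round of interviews with a human expert and another hand-written rule, which is exactly why these systems were so expensive to maintain.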

Another Hacker News commenter, KineticLensman, who worked on expert systems in the early 1990s, found that the rule-based approach worked well for “getting domain experts to articulate the heuristics they used,” but the real value ended up being the process of knowledge extraction rather than the AI system itself. The GUI they built for their frame-based component “was repurposed for a successful in-house modelling tool” that outlasted the AI.

The AI died. The tool survived.

The 2012 Moment Everything Changed

What ended the second AI winter wasn’t a single breakthrough but a convergence that took decades to materialize: better algorithms, faster hardware, and vastly more data all arriving at approximately the same moment.

The pivotal event came in 2012 when a team from the University of Toronto, led by Geoffrey Hinton, won the ImageNet competition using a deep neural network they called AlexNet. Their error rate was 15.3%. The second-place system, using traditional techniques, had an error rate of 26.2%.

That gap changed everything.

Neural networks had existed since the 1950s. Backpropagation, the training technique that makes deep learning possible, was popularized in the 1980s. But hardware wasn’t powerful enough. Data wasn’t plentiful enough. The networks stayed shallow.

By 2012, both constraints had relaxed. GPUs originally designed for video games turned out to be perfect for neural network training because they could perform thousands of matrix operations in parallel. The internet had generated enormous datasets. The ImageNet database alone contained over a million labeled images, all categorized by humans.
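To see why GPUs mattered so much, note that the heavy lifting in a neural network is matrix multiplication. The sketch below uses plain NumPy and arbitrary example sizes rather than a GPU library, but it shows the shape of the work: each layer is essentially one big matmul, and a GPU can run the millions of independent multiply-adds inside each matmul in parallel.

```python
# Illustrative only: a neural network forward pass is mostly matrix
# multiplication, which is exactly the kind of work GPUs parallelize well.
import numpy as np

batch, d_in, d_hidden, d_out = 64, 1024, 4096, 10  # arbitrary example sizes

x  = np.random.randn(batch, d_in)
W1 = np.random.randn(d_in, d_hidden)
W2 = np.random.randn(d_hidden, d_out)

h      = np.maximum(0, x @ W1)   # one layer: a matmul plus a ReLU
logits = h @ W2                  # another matmul

# Each @ above is millions of independent multiply-adds; a GPU runs
# thousands of them at once, which is why training sped up so dramatically.
print(logits.shape)  # (64, 10)
```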

Researchers realized that bigger networks trained on more data consistently performed better, and this observation, later formalized into scaling laws, drove progress from 2012 to today: more parameters, more training data, and more compute reliably produced better results.
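Published scaling-law work expresses that observation as a power law: predicted loss falls smoothly as parameters and training data grow. Here is a rough sketch of the idea, with the functional form borrowed from that literature and constants that should be read as illustrative rather than exact.

```python
# A sketch of what "scaling laws" formalize: loss falls as a power law in
# parameters (N) and training tokens (D). The functional form follows
# published scaling-law papers; the constants are illustrative, not exact.

def predicted_loss(n_params, n_tokens,
                   E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28):
    return E + A / n_params**alpha + B / n_tokens**beta

# Bigger model + more data => lower predicted loss, smoothly and predictably.
for n, d in [(1e8, 1e10), (1e9, 1e11), (1e10, 1e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> loss ~ {predicted_loss(n, d):.2f}")
```

The practical consequence was that labs could forecast the payoff of a bigger training run before spending the money, which is a large part of why the scaling race accelerated.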

The Longest Summer

We’ve now been in continuous AI summer for over thirteen years. Investment keeps growing. In 2025, large tech companies are projected to spend $364 billion on AI infrastructure, a number that would have seemed unimaginable a decade ago.

The current boom differs from previous ones in important ways that matter for assessing whether this summer will last. Earlier AI summers produced research papers and government contracts, interesting work that rarely touched ordinary people. This one is producing products that hundreds of millions of people use daily. ChatGPT has 800 million weekly active users, and 92% of Fortune 500 companies use it.

But a Hacker News user called “jerf” offered a useful distinction in an October 2024 discussion about whether an AI winter was approaching: “You seem to be unable to separate the concept of ‘hype’ from ‘value.’ The original ‘AI Winter’ was near-total devastation. But it’s probably reasonable to think that after the hype train of the last year or two we’re headed into the Trough of Disillusionment.”

The Gartner hype cycle describes a pattern where new technologies go through an “inflated expectations” phase followed by a “trough of disillusionment” before eventually reaching a “plateau of productivity” where the technology finds its proper role. Many observers think AI is entering that trough now.

The Warning Signs

Gary Marcus, an AI researcher and cognitive scientist who has been skeptical of large language models for years, wrote in his 25 AI Predictions for 2025: “Corporate adoption is far more limited than most people expected, and total profits across all companies (except of course hardware companies like NVidia, which profits from chips rather than models) have been modest at best. Most companies involved have thus far lost money.”

This echoes the pattern from previous winters with uncomfortable precision. Companies invest heavily. Returns disappoint. Funding contracts.

Consultants estimate that the current wave of AI infrastructure spending will require $2 trillion in annual AI revenue by 2030 just to justify the investment, which is more than the combined 2024 revenue of Amazon, Apple, Alphabet, Microsoft, Meta, and Nvidia. The math is challenging.

Technical limitations persist too. Hallucinations remain unsolved. Systems confidently generate false information with no indication that they’re making things up. Reasoning capabilities, despite impressive demos that circulate widely on social media, break down on novel problems that require genuine understanding.

Why This Time Might Actually Be Different

Previous AI winters happened because the technology simply could not do what was promised regardless of how much money or time was invested. Expert systems couldn’t handle unexpected inputs. Neural networks of the 1960s couldn’t learn complex patterns. The gap between promise and capability was fundamental.

Today’s gap is different. The technology demonstrably works for many tasks. The question is whether it works well enough, reliably enough, for enough tasks, at low enough cost, to justify current investment levels.

That’s a business question more than a technical question.

AI tools are reducing customer support costs at companies of all sizes. They’re increasing programming efficiency by handling boilerplate code and catching bugs. They’re automating content generation, data analysis, and research tasks that previously required hours of human labor. These aren’t research projects sitting in labs. They’re deployed products saving companies money.

The applications are real even if the hype around artificial general intelligence may not be. As one technology analyst put it, AI has become “the new Excel. Everyone uses it, but experts still dominate.”

The Honest Assessment

Another AI winter is possible. Investment could contract. Startups could fail. Hiring freezes could spread across the industry. The field could enter a period of consolidation and reduced ambition.

But a full winter, where AI research retreats to a handful of academic labs and the technology disappears from mainstream use? That seems unlikely. Too many real applications exist. Too much infrastructure has been built. Too many people have integrated these tools into their daily work.

What’s more likely is what one Hacker News commenter called an “AI fall.” A cooling of expectations. A shift from “AI will replace all jobs” to “AI is a useful tool that requires human oversight.” A migration of investment from speculative research toward proven applications.

What the History Teaches

The people who lived through previous winters have useful perspective that current AI enthusiasts would benefit from hearing. The technology was oversold then too. But each winter was followed by real progress. Neural networks survived the second winter and eventually became the foundation of the current boom.

The lesson isn’t that hype is harmless or that winters don’t hurt. The lesson is that useful technology survives disappointment.

AI winters prune overgrowth. They end careers and close companies. They clear out the speculators and the charlatans. They don’t end the field.

Practical Guidance

If you’re working with AI professionally, the history suggests a few practical lessons.

Build on capabilities that exist today, not ones that might exist tomorrow. Expert systems failed partly because companies invested in theoretical future capabilities rather than proven current ones. The organizations that survived previous winters focused on narrow applications where AI demonstrably helped.

Expect a shakeout in vendors. Not every AI startup will survive the next few years. Choose tools from companies with sustainable business models and real revenue, not just impressive demos and venture funding.

Develop genuine expertise. When the hype fades, people who understand how these systems actually work become more valuable, not less. The developers who thrived after previous winters were those who understood the technology’s real capabilities and limitations.

And recognize that the history of AI is not a straight line upward. Progress comes in waves. Understanding those waves helps you position yourself to benefit from summer while preparing for the possibility of fall.
