
AI winters refer to periods of reduced funding, interest, and progress in artificial intelligence research, the most significant of which occurred between the 1970s and early 1990s. Each downturn followed a period of intense optimism about AI's capabilities, as researchers and investors discovered that early systems could not deliver on ambitious promises.
The first major decline began in 1974, affecting MIT and other leading research institutions. Researchers had overstated what early systems such as ELIZA could achieve, and the limitations of contemporary computers, which lacked the processing power to handle complex problems, became increasingly apparent. In response, the U.S. Department of Defense and the British Science Research Council drastically cut funding, by approximately 90 percent.
A second decline followed the collapse of specialized expert system companies. That market crashed in 1987, as organizations realized these systems required constant maintenance and could not adapt to new problems.
These winters fundamentally changed AI research. By the 1990s, researchers at Stanford University and Carnegie Mellon University in Pittsburgh had adopted more modest, focused objectives rather than pursuing artificial general intelligence. This pragmatic shift eventually enabled breakthroughs in machine learning and neural networks.