
Neural networks emerged as a computational approach inspired by biological brain structures, with foundational work beginning in the 1940s. In 1943, Warren McCulloch and Walter Pitts, both then working in Chicago, published their seminal paper describing artificial neurons as simplified mathematical models of biological brain cells.
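The McCulloch-Pitts model can be sketched as a simple threshold unit: it fires (outputs 1) when enough of its binary inputs are active. The sketch below is illustrative, not the paper's original notation; the function names are my own.

```python
def mcp_neuron(inputs, threshold):
    """McCulloch-Pitts unit: fires (1) iff the number of active
    binary inputs meets or exceeds the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# Classic demonstration: basic logic gates expressed as threshold units.
AND = lambda a, b: mcp_neuron([a, b], threshold=2)
OR = lambda a, b: mcp_neuron([a, b], threshold=1)
```

Varying only the threshold turns the same unit into different logic gates, which is what made the 1943 model a plausible account of computation in networks of neurons.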
The field experienced an "AI winter" during the 1970s after early expectations proved unrealistic. Research nevertheless continued at institutions such as MIT, Carnegie Mellon University, and Stanford University. In the 1990s, Yann LeCun advanced convolutional neural networks at Bell Labs in New Jersey, while Sepp Hochreiter and Jürgen Schmidhuber introduced long short-term memory (LSTM) networks during the same period.
By 2020, neural networks had become fundamental to artificial intelligence applications worldwide, from natural language processing to computer vision systems.