Neural Networks : A look at the past

A brief look back at the history of Neural Networks (5 min read)

The story began when Warren McCulloch, a neurophysiologist, and Walter Pitts, a logician, teamed up to study the relation between the activity of biological neurons and mathematical logic. They ended up developing an artificial neuron model in their paper "A Logical Calculus of the Ideas Immanent in Nervous Activity", in which they concluded:

Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms.

In simple terms, they realized that a simple biological neuron could be represented using addition and thresholding. Their work was influential and inspired a psychologist named Frank Rosenblatt, who pushed the idea even further and developed the "artificial neuron". He worked on building the first device capable of applying these principles, work he described in "The Design of an Intelligent Automaton".
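To make the "addition and thresholding" idea concrete, here is a minimal sketch in Python. It is my own illustration (the function name and the example thresholds are assumptions, not taken from McCulloch and Pitts' paper): the unit sums its binary inputs and fires only if the sum reaches a threshold.

```python
# A minimal sketch of a McCulloch-Pitts-style neuron (illustration only):
# sum the binary inputs and fire (output 1) only if the sum reaches a threshold.

def mcculloch_pitts_neuron(inputs, threshold):
    """Return 1 if the sum of binary inputs reaches the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs and a threshold of 2, the unit behaves like a logical AND.
print(mcculloch_pitts_neuron([1, 1], threshold=2))  # 1
print(mcculloch_pitts_neuron([1, 0], threshold=2))  # 0

# With a threshold of 1, the same unit behaves like a logical OR.
print(mcculloch_pitts_neuron([1, 0], threshold=1))  # 1
print(mcculloch_pitts_neuron([0, 0], threshold=1))  # 0
```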

An MIT professor named Marvin Minsky (who was a grade behind Rosenblatt at the same high school!), along with Seymour Papert, wrote a book called Perceptrons (MIT Press) about Rosenblatt’s invention. They showed that a single layer of these devices was unable to learn some simple but critical mathematical functions (such as XOR). In the same book, they also showed that using multiple layers of the devices would allow these limitations to be addressed. Unfortunately, only the first of these insights was widely recognized. As a result, the global academic community nearly entirely gave up on neural networks for the next two decades.
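To see why XOR matters here, note that no single threshold unit can separate its four input cases, but a second layer fixes this: XOR(a, b) can be written as "OR(a, b) but not AND(a, b)". The sketch below is my own illustration of that decomposition (not code from Minsky and Papert's book), using the same threshold units as above.

```python
# A minimal sketch (illustration only) of solving XOR with two layers of
# threshold units: first layer computes OR and AND, second layer combines them.

def step(x, threshold):
    """Simple threshold unit: fire if the weighted sum reaches the threshold."""
    return 1 if x >= threshold else 0

def xor_two_layer(a, b):
    # First layer: OR and AND of the inputs, each as a sum plus a threshold.
    h_or = step(a + b, 1)
    h_and = step(a + b, 2)
    # Second layer: fire when OR is on and AND is off (weights +1 and -1).
    return step(h_or - h_and, 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_two_layer(a, b))  # last column: 0, 1, 1, 0
```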

In the 1980s, most models were built with a second layer of neurons, thus avoiding the problem that had been identified by Minsky and Papert. And indeed, neural networks were widely used during the ’80s and ’90s for real, practical projects. However, again a misunderstanding of the theoretical issues held back the field. In theory, adding just one extra layer of neurons was enough to allow any mathematical function to be approximated with these neural networks, but in practice such networks were often too big and too slow to be useful.

Although researchers showed 30 years ago that to get practical, good performance you need to use even more layers of neurons, it is only in the last decade that this principle has been more widely appreciated and applied. Neural networks are now finally living up to their potential, thanks to the use of more layers, coupled with the capacity to do so because of improvements in computer hardware.
