Abbreviated History of Artificial Intelligence

Fernando Pavón

CEO of Gamco

This article synthesizes part of the chapter "Concept and Brief History of Artificial Intelligence" from the thesis "Artificial Intelligence: Knowledge Generation Based on Machine Learning and Application in Different Sectors", by Fernando Pavón.

Through research on the mechanisms that govern the morphology and connective processes of nerve cells, the father of neuroscience, Santiago Ramón y Cajal (Nobel Prize in Medicine, 1906), developed a revolutionary theory that came to be called the "neuron doctrine": brain tissue is composed of individual cells (Wikipedia).

Ramón y Cajal was the first to describe the nervous system of living beings in terms of neurons and synaptic processes. These studies were the basis on which AI pioneers modeled artificial neurons, giving rise to Artificial Neural Networks (ANNs).


Stages of AI throughout history

Based on the historical division offered by Russell and Norvig, the following stages and their evolution throughout history can be distinguished:

1. When did artificial intelligence begin?

The first work generally recognized as belonging to Artificial Intelligence was done by Warren McCulloch and Walter Pitts in 1943. They proposed an artificial neuron model in which each neuron is characterized by an "on-off" state.

The switch to the "on" state occurred in response to stimulation by a sufficient number of neighboring neurons. The researchers showed that any computable function can be computed by a network of connected neurons, and that all the logical connectives (and, or, not, etc.) can be implemented by simple network structures.
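To make this concrete, the following is a minimal sketch, in Python, of how such threshold ("on-off") neurons can implement the basic logical connectives; the weights and thresholds are illustrative choices, not taken from the original paper:

```python
# Minimal sketch of a McCulloch-Pitts "on-off" neuron (illustrative values).
# The neuron fires (outputs 1) when the weighted sum of its inputs reaches a threshold.

def mcculloch_pitts(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Logical connectives as single neurons:
AND = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcculloch_pitts([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcculloch_pitts([a],    [-1],   threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
    print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```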

McCulloch and Pitts also suggested that artificial neural networks could learn.

Donald Hebb developed a simple rule to modify the weight of the connections between neurons. His rule, known as "Hebbian Learning", is still a useful model today.
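As an illustration, here is a minimal sketch of Hebb's rule in Python; the learning rate and activity values are invented for the example, not taken from Hebb's work:

```python
import numpy as np

# Sketch of Hebb's rule: a connection is strengthened when the neurons on both
# sides of it are active at the same time.

def hebbian_update(w, x, y, learning_rate=0.1):
    """Strengthen each weight in proportion to the product of pre- and post-synaptic activity."""
    return w + learning_rate * y * x

w = np.zeros(3)                  # initial connection weights
x = np.array([1.0, 0.0, 1.0])    # pre-synaptic (input) activity

# If the post-synaptic neuron is active (y = 1) at the same time as inputs 0 and 2,
# the connections that were co-active are reinforced:
w = hebbian_update(w, x, y=1.0)
print(w)    # -> approximately [0.1, 0.0, 0.1]
```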

In 1950, Marvin Minsky and Dean Edmonds built the first neural computer, SNARC, which simulated a network of 40 neurons.

Minsky continued to study universal computation using neural networks, although he was quite skeptical about their real possibilities; he later authored influential theorems demonstrating the limitations of artificial neural networks.

We cannot end this brief review of the beginnings of Artificial Intelligence without mentioning Alan Turing's influential paper "Computing Machinery and Intelligence", which introduced the famous Turing test starting from his equally famous question, "Can machines think?". This 1950 article posed questions that have evolved over time into the current concepts of Machine Learning, Genetic Algorithms and Reinforcement Learning.

2. How did artificial intelligence begin? 

Within the history of Artificial Intelligence, the "official birth" of the field can be placed in the summer of 1956 at Dartmouth College.

The father was John McCarthy, who convinced Minsky, Claude Shannon, and Nathaniel Rochester to bring together the most eminent researchers in the fields of automata theory, neural networks, and the study of intelligence to organize a two-month workshop in the summer of 1956.

The Dartmouth workshop did not produce any immediate breakthroughs, but the emerging field of Artificial Intelligence was dominated by its participants and their students for the next two decades.

At Dartmouth, it was also argued why a new discipline was needed instead of grouping AI studies within one of the existing disciplines.

Main reasons why AI should be considered a new discipline:

  • AI aims to duplicate human faculties such as creativity, self-learning or the use of language.
  • The methodology used comes from computer science, and Artificial Intelligence is the only specialty that tries to build machines capable of functioning autonomously in complex and dynamic environments.

3. Great Expectations (1952-1969)

These were years of great enthusiasm because some very promising work appeared:

  • IBM developed some programs based on Artificial Intelligence. Among them, a system was created capable of proving geometric theorems that even mathematics students found difficult.
  • Arthur Samuel created in 1952 a program to play checkers that was able to "learn to play". In fact, the program ended up playing better than its creator. The program was shown on television in 1956.
  • In 1958, McCarthy created the Lisp language, which became the dominant language for AI for the next 30 years.
  • The neural networks introduced by McCulloch and Pitts also underwent important developments.
  • The Adaline network, based on Hebb's learning rule, appeared.
  • The Perceptron convergence theorem appeared, guaranteeing that the learning algorithm could adjust the connection weights of a perceptron to match any input data, provided such a match exists (a sketch of the learning rule is shown after this list).
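Below is a minimal sketch of the perceptron learning rule in Python; the learning rate, epoch count and the OR example are illustrative choices, not a reconstruction of the original Perceptron hardware or proofs:

```python
import numpy as np

# Minimal sketch of the perceptron learning rule (illustrative values).
# The convergence theorem guarantees that this procedure finds suitable weights
# whenever the data can in fact be separated by a perceptron.

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Adjust weights and bias whenever an example is misclassified."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if np.dot(w, xi) + b >= 0 else 0
            error = target - prediction          # -1, 0 or +1
            w += lr * error * xi
            b += lr * error
    return w, b

# Learning the (linearly separable) OR function:
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b >= 0 else 0 for xi in X])   # -> [0, 1, 1, 1]
```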

4. A Dose of Reality (1966-1973)

Many researchers in the new field of AI made bold predictions that never came to pass.

Herbert Simon (Nobel Prize in Economics, 1978) went so far as to predict in 1957 that machines would be able to think, learn and create to the point of surpassing the human mind itself. Evidently, that prediction has not come true, at least so far.

There were also resounding failures in programming machine translators from Russian to English in the 1960s. These failures led the U.S. government to withdraw funding for research into the development of translators in 1966.

Likewise, the combinatorial explosion of many of the problems addressed by AI made them computationally intractable. Evolutionary or genetic algorithms were computationally very expensive and, in many cases, did not reach any useful solution.

One of the main difficulties of AI lay in the fundamental limitations of the basic structures used to generate intelligent behavior. For example, in 1969 Minsky and Papert proved that, although the perceptron could learn anything it could represent, in reality it could represent very little: a single perceptron cannot even represent the XOR function.

5. Knowledge-based systems (1969-1979)

Another important milestone in the history of Artificial Intelligence came in 1969 with the appearance of "expert systems". These changed the approach that Artificial Intelligence had followed until then: finding the solution to a complete problem through a process of "reasoning" built from simple, general principles.

Expert systems are instead based on richer rules or principles drawn from a much more specific field of knowledge, which, in many cases, means that the answer to the problem posed is practically built into the system.
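As a toy illustration of this rule-based idea, the following sketch applies invented rules by simple forward chaining; it is not meant to reflect the rules of any real expert system:

```python
# Toy illustration of the rule-based approach behind expert systems.
# The rules and facts below are invented for the example; real expert systems
# encoded thousands of domain-specific rules.

RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions are satisfied until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}, RULES))
# -> {'has_fever', 'has_cough', 'short_of_breath', 'possible_flu', 'see_doctor'}
```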

One of the first expert systems was the DENDRAL program (Dendritic Algorithm), developed at Stanford, which solved the problem of determining molecular structure from mass spectrometer information. 

6. Artificial Intelligence becomes an industry (1980-present)

In the early 1980s, AI started to become an industry, mainly in the United States, where companies emerged with working groups dedicated to developments based on expert systems, robotics and artificial vision, as well as the manufacture of the necessary hardware and software.

For example, the first commercial expert system, called R1, started operating at DEC (Digital Equipment Corporation) in 1982, and assisted in the configuration of orders for new computer systems.

In 1986, the company estimated that the system had saved $40 million in one year.

By 1988, DEC had developed 40 expert systems, DuPont had 100 in use and 500 in development, with an estimated savings of $10 million per year. 

7. The Return of Artificial Neural Networks (1986-present)

In the mid-1980s, several research groups made progress in the development of the back-propagation learning algorithm for neural networks, specifically for the Multilayer Perceptron; the algorithm had originally been developed in 1969.

This algorithm was applied to many learning problems, and the dissemination of the results in the Parallel Distributed Processing collection caused a great deal of excitement.
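As an illustration, the following is a minimal sketch of back-propagation for a one-hidden-layer Multilayer Perceptron, written with NumPy and trained on the XOR problem; the layer sizes, learning rate and iteration count are arbitrary choices for the example:

```python
import numpy as np

# Minimal sketch of back-propagation for a one-hidden-layer Multilayer Perceptron,
# trained on XOR (the classic function a single perceptron cannot represent).

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers
    d_out = (out - y) * out * (1 - out)          # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)           # gradient at the hidden layer

    # Gradient-descent weight updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Typically close to [0, 1, 1, 0]; exact values depend on the random initialization.
print(out.round(2).ravel())
```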

Currently, progress is being made in tools that implement neural networks, including developments in the cloud (cloud computing). These make it possible to train, validate and deploy artificial neural networks, as well as to "share" them among researchers and developers around the world.

8. AI adopts the scientific method (1987-present)

From the late 1980s to the present, there has been a revolution in both the content and methodology of Artificial Intelligence work.

In recent years, it has become more common to build on existing theories than to develop new ones. In this way, these theories are being endowed with the mathematical rigor they require, which makes it possible to demonstrate their effectiveness on real problems rather than on simulations or simple laboratory examples.

In methodological terms, Artificial Intelligence has firmly embraced the scientific method.
