The content of this article synthesizes part of the chapter "Concept and brief history of Artificial Intelligence" from the thesis "Artificial Intelligence: Knowledge Generation Based on Machine Learning and Application in Different Sectors" by Fernando Pavón.
Through research on the mechanisms that govern the morphology and connective processes of nerve cells, the father of neuroscience, Santiago Ramón y Cajal, Nobel Prize in Medicine in 1906, developed a revolutionary theory that came to be called the "neuron doctrine": brain tissue is composed of individual cells (Wikipedia).
Ramón y Cajal was the first to describe the nervous system of living beings in terms of individual neurons and synaptic processes. These studies became the basis on which AI pioneers modeled artificial neurons, giving rise to Artificial Neural Networks (ANNs).
Based on the historical division offered by Russell and Norvig, the following stages and their evolution throughout history can be distinguished:
The first work generally recognized as belonging to Artificial Intelligence was done in 1943 by Warren McCulloch and Walter Pitts, who proposed an artificial neuron model in which each neuron is characterized by an "on-off" state.
The switch to the "on" state occurs in response to stimulation by a sufficient number of neighboring neurons. The researchers showed that any computable function can be computed by a network of connected neurons, and that all logical connectives (AND, OR, NOT, etc.) can be implemented by simple network structures, as the sketch below illustrates.
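For intuition, here is a minimal sketch of a McCulloch-Pitts-style threshold neuron in Python; the weights and thresholds are illustrative choices, not taken from the original 1943 paper.

```python
# Minimal sketch of a McCulloch-Pitts threshold neuron. Weights and
# thresholds below are illustrative, not the paper's original notation.

def mcp_neuron(inputs, weights, threshold):
    """Fire ('on' = 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logical connectives built from single neurons:
AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcp_neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```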
McCulloch and Pitts also suggested that artificial neural networks could learn.
In 1949, Donald Hebb developed a simple rule to modify the weights of the connections between neurons. His rule, known as "Hebbian Learning", is still a useful model today.
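A minimal sketch of the rule in its common modern form, dw = eta * x * y ("cells that fire together wire together"); the learning rate and activity patterns are illustrative, not from Hebb's book.

```python
import numpy as np

# Hebbian update in its common modern form: dw = eta * x * y, i.e.
# connections between co-active neurons are strengthened. The learning
# rate and patterns below are illustrative assumptions.

eta = 0.1
x = np.array([1.0, 0.0, 1.0])   # presynaptic activities
y = np.array([1.0, 1.0])        # postsynaptic activities

W = np.zeros((2, 3))            # weights from 3 pre- to 2 post-neurons
for _ in range(5):
    W += eta * np.outer(y, x)   # strengthen co-active pairs only
print(W)                        # entries grow where both x and y are active
```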
In 1950, Marvin Minsky and Dean Edmonds built the first neural computer, SNARC, which simulated a network of 40 neurons.
Minsky continued to study universal computation using neural networks, although he was quite skeptical about the real possibilities of artificial neural networks. Minsky was the author of influential theorems that demonstrated the limitations of artificial neural networks.
We cannot end this brief review of the origins of Artificial Intelligence without mentioning Alan Turing's influential work "Computing Machinery and Intelligence", which introduced the famous Turing test, starting from his famous question: "Can machines think?" This 1950 article posed questions that have evolved over time into the current concepts of Machine Learning, Genetic Algorithms and Reinforcement Learning.
Within the history of Artificial Intelligence, we can place its "official birth" in the summer of 1956 at Dartmouth College (Hanover, New Hampshire).
The father was John McCarthy, who convinced Minsky, Claude Shannon and Nathaniel Rochester to help him bring together the most eminent researchers in automata theory, neural networks and the study of intelligence for a two-month workshop.
The Dartmouth conference did not produce any immediate breakthroughs, but the emerging field of Artificial Intelligence was dominated by its participants and their students for the next two decades.
At Dartmouth it was established why a new discipline was needed, rather than grouping AI studies within one of the existing disciplines, and the main reasons why AI should be considered a field in its own right were set out.
These were years of great enthusiasm, because some very promising work appeared.
Many researchers in the new field of AI made bold predictions that never came to pass.
Herbert Simon (Nobel Prize in Economics, 1978) went so far as to predict in 1957 that machines would be able to think, learn and create, to the point of surpassing the human mind itself. Evidently, this has proven false, at least up to the present time.
There were also resounding failures in programming machine translators from Russian to English in the 1960s. These failures led the U.S. government to withdraw funding for research into the development of translators in 1966.
Likewise, the combinatorial explosion in many of the problems addressed by AI made them computationally intractable. Evolutionary or genetic algorithms were computationally very expensive and, in many cases, failed to reach any conclusion.
One of the main difficulties of AI centered on fundamental limitations of the basic structures used to generate intelligent behavior. For example, in 1969 Minsky and Papert showed that, although the perceptron could learn anything it was able to represent, in reality it could represent very little; the sketch below illustrates the classic case.
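The textbook example of this representational limit is XOR, which is not linearly separable and therefore cannot be computed by a single perceptron. The following sketch (hyperparameters are illustrative) trains a perceptron with the classic update rule on AND, which it learns, and on XOR, which it never can.

```python
import numpy as np

# A single perceptron (linear threshold unit) trained with the classic
# perceptron rule. It learns AND easily but cannot represent XOR, the
# limitation highlighted by Minsky and Papert.

def train_perceptron(X, t, epochs=100, eta=0.1):
    w = np.zeros(X.shape[1] + 1)                # weights + bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append constant input
    for _ in range(epochs):
        for x, target in zip(Xb, t):
            y = 1 if np.dot(w, x) >= 0 else 0
            w += eta * (target - y) * x         # perceptron update rule
    return w

def accuracy(w, X, t):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    preds = (Xb @ w >= 0).astype(int)
    return (preds == np.array(t)).mean()

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print("AND:", accuracy(train_perceptron(X, [0, 0, 0, 1]), X, [0, 0, 0, 1]))
print("XOR:", accuracy(train_perceptron(X, [0, 1, 1, 0]), X, [0, 1, 1, 0]))
# AND reaches accuracy 1.0; XOR never does, regardless of training time.
```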
Another important point in the history of Artificial Intelligence came in 1969 with the appearance of "expert systems". These changed the approach that Artificial Intelligence had followed until then: finding the solution to a complete problem through a process of "reasoning" broken down into simple principles.
Expert systems are instead based on more complex rules or principles drawn from a much more specific field of knowledge, which, in many cases, practically means that the answer to the problem posed is almost known in advance; a toy example follows.
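To make the idea concrete, here is a toy forward-chaining rule engine in the spirit of expert systems. The rules and facts are entirely hypothetical, invented for illustration; real systems like DENDRAL used far richer rule sets.

```python
# Toy forward-chaining rule engine: domain knowledge is captured as
# if-then rules over known facts. Rules and facts are hypothetical.

rules = [
    ({"has_fever", "has_cough"},       "possible_flu"),
    ({"possible_flu", "short_breath"}, "refer_to_doctor"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose conditions hold until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_breath"}, rules))
# -> derives 'possible_flu' and, from it, 'refer_to_doctor'
```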
One of the first expert systems was the DENDRAL program (Dendritic Algorithm), developed at Stanford, which solved the problem of determining molecular structure from mass spectrometer information.
In the early 1980s, AI started to become an industry, mainly in the United States, where companies emerged with working groups dedicated to developments based on expert systems, robotics and artificial vision, as well as the manufacture of the necessary hardware and software.
For example, the first commercial expert system, called R1, started operating at DEC (Digital Equipment Corporation) in 1982, and assisted in the configuration of orders for new computer systems.
In 1986, the company estimated that the system had saved $40 million in one year.
By 1988, DEC had developed 40 expert systems, DuPont had 100 in use and 500 in development, with an estimated savings of $10 million per year.
In the mid-1980s, several research groups made progress on the back-propagation learning algorithm for neural networks, originally developed in 1969, applying it specifically to the Multilayer Perceptron.
This algorithm was applied to many learning problems, and the dissemination of the results in the Parallel Distributed Processing collection caused a great deal of excitement; a minimal sketch follows.
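Here is a minimal sketch of back-propagation on a small Multilayer Perceptron, trained on XOR, the very function a single perceptron cannot represent. The architecture, learning rate and iteration count are illustrative choices, not historical ones.

```python
import numpy as np

# Minimal MLP trained with back-propagation on XOR. All
# hyperparameters are illustrative assumptions.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
t = np.array([[0], [1], [1], [0]], float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)   # hidden -> output

sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error toward the input layer
    dy = (y - t) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= 0.5 * h.T @ dy; b2 -= 0.5 * dy.sum(0)
    W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(0)

print(np.round(y.ravel(), 2))   # should approach [0, 1, 1, 0]
```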
Currently, progress continues in tools that implement neural networks, including developments in the cloud (cloud computing). These make it possible to train, validate and deploy artificial neural networks, as well as to "share" them among researchers and developers around the world.
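As one hedged example of how today's frameworks streamline this workflow, the sketch below uses TensorFlow/Keras; the dataset and architecture are placeholders, and the saved file is simply a shareable artifact.

```python
import numpy as np
import tensorflow as tf

# Illustrative end-to-end workflow with a modern framework:
# define, train, validate and save a small network. Data and
# architecture are placeholder assumptions.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 0], dtype=float)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(8, activation="tanh"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, t, epochs=500, verbose=0)          # training
print(model.evaluate(X, t, verbose=0))          # validation
model.save("xor_model.keras")                   # shareable artifact
```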
From the late 1980s to the present, there has been a revolution in both the content and methodology of Artificial Intelligence work.
In recent years, it has become more common to build on existing theories than to develop new ones. In this way, these theories are being given the mathematical rigor they require, which makes it possible to demonstrate their effectiveness on real problems rather than on simulations or simple laboratory examples.
In methodological terms, Artificial Intelligence has firmly embraced the scientific method.