AI's Modern Timeline

While volumes could be written about the history and progression of AI, this section focuses on some of the important epochs of AI's advance and the pioneers who shaped it. To put that progression into perspective, we concentrate on a short list of the modern ideas of AI, beginning in the 1940s [Stottler Henke 2002].

1940s-Birth of the Computer

The era of intelligent machines came soon after the development of early computing machines, many of which were built to crack the enemy ciphers used to encrypt communications during World War II. In 1940, Robinson was constructed as the first operational computer using electromechanical relays; its purpose was decoding German military communications encrypted by the Enigma machine. Robinson was named after the designer of cartoon contraptions, Heath Robinson. Three years later, vacuum tubes replaced the electromechanical relays to build the Colossus, a faster computer designed to decipher increasingly complex codes. In 1945, the more commonly known ENIAC was created at the University of Pennsylvania by Dr. John W. Mauchly and J. P. Eckert, Jr. The goal of this computer was to compute World War II ballistic firing tables.

In 1945, Walter Pitts and Warren McCulloch constructed neural networks with feedback loops to show how such networks could be used to compute. These early neural networks were electronic in their embodiment and helped fuel enthusiasm for the technique. Around the same time, Norbert Wiener created the field of cybernetics, which included a mathematical theory of feedback in biological and engineered systems. An important aspect of this work was the concept that intelligence is the process of receiving and processing information to achieve goals.

Finally, in 1949, Donald Hebb introduced a way to provide learning capabilities to artificial neural networks. Called Hebbian learning, the process adjusts the weights of the neural network such that its output reflects its familiarity with an input. While problems existed with the method, almost all unsupervised learning procedures are Hebbian in nature.
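
Loosely, Hebb's rule strengthens a connection whenever an input and the neuron's output are active together. As a rough illustration (not taken from the text; the single linear neuron, the example pattern, the initial weights, and the learning rate eta are all assumptions), the short Python sketch below shows how repeatedly presenting the same pattern makes the network's response to it grow:

    import numpy as np

    def hebbian_update(weights, x, eta=0.1):
        # One Hebbian step: the weight change is proportional to the product
        # of the input and the neuron's output (delta_w = eta * y * x), so a
        # pattern that has been seen before produces a stronger response.
        y = np.dot(weights, x)              # neuron output for this input
        return weights + eta * y * x

    # Presenting the same pattern several times strengthens the response.
    pattern = np.array([1.0, 0.5, -1.0])
    w = 0.1 * pattern                       # small assumed initial association
    for _ in range(5):
        w = hebbian_update(w, pattern)
        print(np.dot(w, pattern))           # the output grows with each pass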

1950s-The Birth of AI

The 1950s marked the modern birth of AI. Alan Turing proposed the "Turing Test" as a way to recognize machine intelligence. In the test, one or more judges pose questions to two hidden entities, one human and one machine, and based upon the responses try to determine which is which. If the judges cannot reliably identify the machine imitating the human, the machine can be considered intelligent. While controversial, a form of the Turing Test called the "Loebner Prize" exists today as a contest to find the best imitator of human conversation.

AI in the 1950s was primarily symbolic in nature. It was discovered that computers of this era could manipulate symbols as well as numerical data. This led to the construction of a number of programs such as the Logic Theorist (by Newell, Simon, and Shaw) for theorem proving and the General Problem Solver (by Newell and Simon) for means-ends analysis. Perhaps the biggest application development of the 1950s was a checkers-playing program (by Arthur Samuel) that eventually learned how to beat its creator.

Two AI languages were also developed in the 1950s. The first, the Information Processing Language (IPL), was developed by Newell, Simon, and Shaw for the construction of the Logic Theorist. IPL was a list-processing language and led to the more commonly known language, LISP. LISP was developed in the late 1950s at the MIT AI lab by John McCarthy, one of the early pioneers of AI, and soon replaced IPL as the language of choice for AI applications.

John McCarthy coined the term "artificial intelligence" as part of a proposal for the Dartmouth conference on AI. In 1956, early AI researchers met at Dartmouth College to discuss thinking machines. As part of the proposal, McCarthy and his co-authors wrote:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. [McCarthy et al. 1955]

The Dartmouth conference brought together early AI researchers for the first time, but did not reach a common view of AI.

In the late 1950s, John McCarthy and Marvin Minsky founded the Artificial Intelligence Lab at MIT, still in operation today.

1960s-The Rise of AI

In the 1960s, an expansion of AI occurred due to advances in computer technology and an increasing number of researchers focusing on the area. Perhaps the greatest indicator that AI had reached a level of acceptability was the emergence of critics. Two critical works from this period were Mortimer Taube's Computers and Common Sense: The Myth of Thinking Machines and Hubert Dreyfus's Alchemy and Artificial Intelligence (a RAND Corporation study).

Knowledge representation was a strong theme during the 1960s, as strong AI remained a primary focus of research. Toy worlds, such as Minsky and Papert's "Blocks Microworld Project" at MIT and Terry Winograd's SHRDLU, were built to provide small, confined environments in which to test ideas on computer vision, robotics, and natural language processing.

John McCarthy founded Stanford University's AI Laboratory in the early 1960s. During the same period, the nearby Stanford Research Institute built Shakey, a mobile robot that could navigate a block world and follow simple instructions.

Neural network research flourished until the late 1960s, when Minsky and Papert published Perceptrons: An Introduction to Computational Geometry. The authors identified fundamental limitations of simple, single-layer perceptrons, and the book's influence resulted in a severe reduction in funding for neural network research for over a decade.
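
To make the limitation concrete, the short Python sketch below (an illustration only, not material from the text) applies the standard perceptron learning rule to the XOR function, the classic example associated with Minsky and Papert's critique; no matter how long it trains, a single-layer perceptron never classifies all four cases correctly:

    import numpy as np

    # XOR: the canonical function a single-layer perceptron cannot represent,
    # because no single line separates (0,1) and (1,0) from (0,0) and (1,1).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 1, 1, 0])

    w, b = np.zeros(2), 0.0
    for epoch in range(100):                # standard perceptron learning rule
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            w += 0.1 * (target - pred) * xi
            b += 0.1 * (target - pred)

    print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])
    # Never prints [0, 1, 1, 0]; the weights simply keep cycling.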

Perhaps the most memorable portrayal of AI's future in the 1960s came in Arthur C. Clarke's book, and Stanley Kubrick's film based upon it, 2001: A Space Odyssey. HAL, an intelligent computer aboard a Jupiter-bound spacecraft, murdered most of the crew out of paranoia over its own survival.

1970s-The Fall of AI

The 1970s represented the fall of AI, after the field proved unable to meet inflated expectations. Practical applications of AI were still rare, which compounded funding problems at MIT, Stanford, and Carnegie Mellon. Funding for AI research was also cut back in Britain around the same time. Fortunately, research continued, with a number of important developments.

Doug Lenat of Stanford University created the Automated Mathematician (AM) and later EURISKO to discover new theories within mathematics. AM successfully rediscovered concepts of number theory but, being based upon a limited amount of encoded heuristics, eventually reached a ceiling in its ability to discover. EURISKO, Lenat's follow-up effort, was built with AM's limitations in mind and could identify its own heuristics as well as determine which were useful and which were not [Wagman 2000].

The first practical applications of fuzzy logic appeared in the early 1970s (though Lotfi Zadeh created the concept in the 1960s). Fuzzy control was applied to the operation of a steam engine at Queen Mary College and was the first among numerous applications of fuzzy logic to process control.

The creation of languages for AI continued in the 1970s with the development of Prolog (PROgrammation en LOGique, or programming in logic). Prolog operates on rules and facts and is well suited to programs that manipulate symbols rather than perform numerical computation. While Prolog proliferated outside of the United States, LISP retained its stature as the language of choice for AI applications.

The development of AI for games continued in the 1970s with the creation of a backgammon program at Carnegie Mellon. In 1979, the program played well enough to defeat the world champion backgammon player, Luigi Villa of Italy, the first time a computer program had beaten a reigning human world champion at a board game.

1980s-An AI Boom and Bust

The 1980s showed promise for AI, as sales of AI-based hardware and software exceeded $400 million in 1986. Much of this revenue came from the sale of LISP machines and expert systems, both of which were gradually getting better and cheaper.

Expert systems were used by a variety of companies in scenarios such as mineral prospecting and investment portfolio advising, as well as in specialized applications such as electric locomotive diagnosis at GE. The limits of expert systems also became apparent as their knowledge bases grew larger and more complex. For example, Digital Equipment Corporation's XCON (system configurator) reached 10,000 rules and proved very difficult to maintain.

Neural networks also experienced a revival in the 1980s, finding applications in speech recognition and other problems that require learning.

Unfortunately, the 1980s saw both a rise and a fall of AI, primarily because of the failings of expert systems. Many other AI applications nevertheless improved greatly during the decade. For example, speech recognition systems could now operate in a speaker-independent manner (usable by more than one speaker without explicit training), support a large vocabulary, and recognize continuous speech (allowing the speaker to talk naturally rather than with distinct starts and stops between words).

1990s to Today-AI Rises Again, Quietly

The 1990s introduced a new era of weak AI applications (see Table 1.1). It became clear that a product incorporating AI is not sought after because it includes AI technology, but because it solves a problem more efficiently or effectively than traditional methods. AI therefore found its way into a greater number of applications, but without fanfare.

Table 1.1: AI Applications in the 1990s (adapted from Stottler Henke 2002)

Credit Card Fraud Detection Systems

Face Recognition Systems

Automated Scheduling Systems

Business Revenue and Staffing Requirements Prediction Systems

Configurable Data Mining Systems for Databases

Personalization Systems

A notable event for game-playing AI occurred in 1997 with IBM's chess-playing program Deep Blue (which grew out of earlier work at Carnegie Mellon). Running on a highly parallel supercomputer, Deep Blue was able to beat Garry Kasparov, the world chess champion.

Another interesting AI event of the late 1990s occurred more than 60 million miles from Earth. The Deep Space 1 (DS1) spacecraft was built to test 12 high-risk technologies for future space missions and also performed a comet flyby. DS1 included an artificial intelligence system called the Remote Agent, which was handed control of the spacecraft for a short duration, a job normally performed by a team of scientists through a set of ground control terminals. The goal of the Remote Agent was to demonstrate that an intelligent system could provide control capabilities for a complex spacecraft, allowing scientists and spacecraft control teams to concentrate on mission-specific elements.



