Chapter 1. Introduction


If someone casually mentions artificial intelligence (AI), several different associations may come to mind. Some may remember the movie 2001: A Space Odyssey and the crazed on-board computer HAL. Some may think of Steven Spielberg's movie A.I. Artificial Intelligence, which told the sad story of an orphaned robot, essentially a modern-day version of Pinocchio. Others may think of the great chess champion Garry Kasparov being defeated by IBM's Deep Blue in 1997, or of his 2003 match against Deep Junior that ended in a draw.

What may not immediately come to mind is the fact that many aspects of AI are already in wide use in the business world and that more applications are on the way.

There are many definitions of AI, but in general, the definitions describe machines that have the ability to think and reason like an intelligent human. Yet there is debate about exactly what makes humans intelligent. Psychologists continue to develop new definitions of intelligence and new tests to measure it. What you believe about intelligence depends a lot on the theory of development to which you subscribe. For instance, behaviorists believe that intelligence is measured by a person's ability to communicate effectively.

Furthermore, some people are intelligent in one area but not another. Abilities that constitute intelligence in one area may not be effective in other areas. Ask a brilliant mathematician to draw you a picture and you might end up with a stick figure. It gets even trickier when you consider such things as emotional intelligence, or maturity.

So how do we know if and when AI has been created? That is a very good question, and one that scientists in many fields have been trying to answer for over fifty years. The sidebar titled "Historical Highlights of AI" offers a synopsis of important events during this period. In the beginning, expectations were high, but unfortunately many researchers underestimated the scope of the problem.

Although we are now closer to a definition of artificial intelligence, we are not necessarily close to producing machines with "real AI." By this I mean computer systems that are able to act and, most important, react as a human would.

We cannot build an android like Star Trek's Data anytime soon. But this does not mean that we cannot design some very usable applications utilizing AI technologies in the meantime.

Historical Highlights of AI

  • In 1950, Alan Turing published the paper "Computing Machinery and Intelligence," in which he asked whether machines can think. He later proposed the famous "Turing test," which is now a litmus test for AI. The original version involved placing a man and a woman in separate rooms and having a judge, who could not see them, try to determine which was which. The test was quickly modified so that a computer took the place of the woman and the judge communicated with both the man and the computer. If the judge could be fooled into identifying the computer as the person, then the computer passed the test.

  • The term "artificial intelligence" was first used in 1956 when John McCarthy coined the phrase. This happened at a Dartmouth summer conference where several mathematicians and scientists had gathered to discuss the potential of the field. During this time, research was focused on natural language understanding and problem solving. It was believed that as soon as better hardware became available, "true" AI would be achieved.

  • The 1960s and 1970s brought about a dose of reality for AI researchers. They soon realized how complex human thinking really was. Games seemed to benefit the most from AI technologies, since they involved a limited set of rules. Arthur Samuel designed a checkers game that eventually was able to beat a former state checkers champion.

  • As computer usage increased in the 1980s, many companies found themselves with lots of data but no usable information. That is when expert systems made their grand appearance. All of a sudden, companies like American Express were able to increase profitability by cutting down on bad credit authorizations. For the first time, an AI-based technology offered the thing businesses were really interested in: money.

  • The 1990s brought about big hardware advancements in terms of computer speed and storage capacity. This enabled many AI researchers to push past some of the physical limitations that once hindered them. Advances in speech recognition and artificial senses were becoming more common. Neural nets were being used in many financial applications, and companies like Volvo were using genetic algorithms to solve manufacturing problems.

  • In the new century, the field of AI quietly continues to advance, and its techniques are used in many different industries. Large banks use AI technologies to analyze credit card transactions and detect fraudulent behavior. In attempts to build better control systems, carmakers are installing voice recognition in many of their luxury models. Some schools are experimenting with intelligent tutoring systems designed to give students individualized instruction.




    Building Intelligent .NET Applications: Agents, Data Mining, Rule-Based Systems, and Speech Processing
    Year: 2005