Artificial Intelligence

[Cover figure: Kismet, a face bot. The parts of his face move to show emotion.]

An I-Search research paper
By: Andrew Ilyas
May 13, 2010
Introduction
For years now, humans have been working on Artificial Intelligence, trying to create intelligent machines: machines that are faster and smarter than the human brain. One question that still remains unanswered in AI is whether a computer will ever be smarter than mankind. This research discusses the use of AI in games and attempts to answer that question. It gives several definitions of Artificial Intelligence and covers its history and search methods. It also highlights some of the applications of AI in robotics. In addition, the paper introduces Game Theory and discusses the future of Artificial Intelligence. Finally, it concludes by answering my Big Think question stated above, and provides further references in AI for interested readers.
What is AI?
Definitions of AI
“AI is the science of making machines do things that would require
intelligence if done by men”- Marvin Minsky, MIT
“The field of computer science that seeks to understand and implement
computer-based technology that can simulate characteristics of human
intelligence”-The Facts on File Dictionary of Artificial Intelligence, by Raoul
Smith
“Computers with human-level intelligence; computer programs that perform
tasks once thought to require human flexibility and judgment”- Artificial
Intelligence, by Philip Margulies
“It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.” - John McCarthy
“The capability of a device to perform functions that are normally associated
with human intelligence, such as reasoning and optimization through
experience.”-[1]
The Turing Test
Alan Turing (1912-1954) was a British mathematician famous for the Turing machine, a theoretical computer consisting of a head that can read and write, and an infinite tape. The head reads a symbol and, depending on the input, moves left or right. Turing is also famous for cracking the German Enigma code. Today, Turing is so respected that the highest honor in computer science is named the Turing Award.
One of the common questions discussed when working with Artificial Intelligence is "How do we know when machines are intelligent?" Does a machine have to excel in every topic, or be good at just one? Do machines have to be aware of their own existence? Some argue that if a machine is equal to a human in all fields, then the machine is intelligent. Addressing the question "Can machines think?" in the journal Mind, Alan Turing took all these considerations into account and came up with a test. Turing called his test the "imitation game". It would consist of three intelligent beings: two humans and one computer. To keep things clear, let the two humans be A and B, and the computer be C. A would have a long typed conversation covering many topics with B and C. The conversation must be typed and held in separate rooms, because the way C speaks or looks should not affect how intelligent it is judged to be; if A, B, and C were all in the same room, A would know immediately that C was a computer. Also, A must cover many topics, so that C cannot impress A with great knowledge in one topic alone. At the end of the imitation game, A tries to guess which of B and C is the computer. If A does not know, or guesses wrong, then the computer is intelligent. This "Turing test" is often mentioned when the progress of AI is discussed.
History
For a machine to be intelligent, it must be able to reason, learn from
experience, set goals for itself, and adapt to the world around it. A machine that
can do these things is the machine that humans have been trying to build for a
long time.
In 1642, Blaise Pascal invented the first "computer". Today this would be called a calculator, but back then, anything that could do advanced mathematical calculations was called a computer. Since his father was a tax collector, Pascal invented his machine, which could add and subtract, to help his father calculate taxes. This invention was also the birth of artificial intelligence.
Inspired by Pascal, the philosopher and mathematician Gottfried Leibniz made a more sophisticated machine that could add, subtract, multiply, and find the square root of numbers using gears and pulleys. These mathematicians led the way to computers and artificial intelligence. Leibniz, for example, disliked using ordinary language for reasoning, because it is ambiguous. In his "new" language, no two words meant the same thing, and no word meant two different things. Although the technology of the 1600s was not advanced enough to build machines that used his language, Leibniz's "perfect language" is a foundation for the programming languages that we use today.
In the 1840s, the mathematician Charles Babbage almost made the first computer in the modern sense. In the 19th century, the government needed a great deal of calculations to be done. Thousands of people did this job, but it was boring, demanded constant concentration, and produced many errors. To take this job over, Babbage designed two giant machines. His first, the Difference Engine, could do advanced mathematical problems; he made a working model of it, but did not have enough money to finish the full machine. His second, more ambitious machine was the Analytical Engine. This was an ongoing project, but it could not be finished because of the primitive technology of the 1800s.

Over the next hundred years, the world's need for computers grew. Finally, the first computer was built in 1951, though who deserves credit for inventing it is still debated. As the years advance, computers get faster and their parts get smaller and more compact, but the basic design remains the same as the computer of 1951. (See the AI Over Time section for a timeline, explanation, further AI history, and my future AI timeline.)
Game Search Methods
Brute Force
Although some AI programs can do intelligent tasks faster than humans can, they do not think in the same way. Computers are much faster, have a bigger memory, and can search larger databases than humans. Brute Force (also known as exhaustive search) makes use of all these advantages of computers over humans. It exploits the fact that a computer's switching elements are roughly a million times faster than the brain's neurons. It searches all possible ways that a program can do something, and then picks the best one.

The advantage of Brute Force is that it is always right: because it examines all the possibilities, it cannot miss the best solution. Since computer parts have been getting smaller and more powerful, programmers have had more freedom to use Brute Force.
The main disadvantages of Brute Force are that it takes a long time and needs a large amount of memory. For example, if Brute Force is used to play a game of chess, this becomes a problem. The program would need to search all possible moves for itself (as a player) and for the opponent. This would make the program take a very long time to respond to each move, and it would need a large amount of space to store the information. In fact, some problems are categorized by the belief that nothing much better than Brute Force exists for them. These are called NP-Complete problems: no polynomial-time algorithm is known for them, and the best known algorithms take on the order of c^n steps, where c is a constant and n is the size of the input.
An example of an NP-Complete problem is the Knapsack Problem. You have a knapsack that can only carry a certain weight, and you have n items, each with a value and a weight. The Knapsack Problem asks for the choice of items to place inside the knapsack such that their total value is greatest, without surpassing the weight that the knapsack can sustain. A brute-force solution checks all 2^n subsets of the items, so its running time is on the order of 2^n (times a constant factor).
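To make this concrete, here is a small sketch in Python (with made-up item data) of the brute-force approach to the Knapsack Problem: it simply checks every one of the 2^n subsets and keeps the best one that fits.

```python
from itertools import combinations

def knapsack_brute_force(items, capacity):
    """Exhaustive search: try every subset of items, keep the best that fits."""
    best_value, best_subset = 0, ()
    for r in range(len(items) + 1):
        for subset in combinations(items, r):      # all 2^n subsets
            weight = sum(w for w, v in subset)
            value = sum(v for w, v in subset)
            if weight <= capacity and value > best_value:
                best_value, best_subset = value, subset
    return best_value, best_subset

# (weight, value) pairs -- made-up example data
items = [(3, 4), (4, 5), (2, 3), (5, 8)]
print(knapsack_brute_force(items, capacity=9))   # (13, ((4, 5), (5, 8)))
```

Because the loop visits every subset, it cannot miss the optimum, but doubling the number of items doubles the work again and again: exactly the exponential blow-up described above.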
Although Brute Force is always right, it still does not think the way the human brain does. After all, many scientists point out, airplanes are not expected to fly like birds. When the future of AI is debated, this comparison between human intelligence and machine intelligence usually comes up.

Brute Force is one of many search algorithms used in AI. Others, not covered here, include Minimax, Alpha-Beta pruning, A* search, and blind search.
Heuristics
Heuristics are rules that narrow down a computer's search. They save the computer from having to use Brute Force and look at all possibilities. This guessing problem was one of computers' limitations before heuristics, and it was first addressed by the program Logic Theorist (LT), made by Allen Newell and Herbert Simon, two AI pioneers.

Logic Theorist was made in the 1950s and designed to prove already-known mathematical theorems. Since the search space for theorem proving is infinite, Newell and Simon could not use Brute Force the way programmers had for everything until LT; if they had, the program would have taken all the space and time in the universe. So their strategy was to teach the computer to make educated guesses, almost the way that humans make decisions. If humans made decisions the way computer programs did before LT, they would be overwhelmed all the time. Simon and Newell called their "guessing rules" heuristics.
Nowadays, heuristics let programs respond quickly by narrowing their search down to what usually works instead of everything. Fingerprint identification systems and credit card fraud detection both use heuristics to narrow down searches. Other systems that use heuristics are the expert systems that predict the weather, treat diseases, and book airplane flights.
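As a contrast with exhaustive search, here is a sketch of one classic guessing rule for the Knapsack Problem described earlier (a greedy value-per-weight heuristic; this is an illustration of the idea, not a rule Newell and Simon themselves used):

```python
def knapsack_greedy(items, capacity):
    """Guessing rule: prefer items with the best value-per-weight ratio.
    Fast, but unlike exhaustive search it is not guaranteed to be optimal."""
    remaining, value = capacity, 0
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if w <= remaining:
            remaining -= w
            value += v
    return value

items = [(3, 4), (4, 5), (2, 3), (5, 8)]     # (weight, value) pairs
print(knapsack_greedy(items, capacity=9))    # 11
```

On this data the heuristic answers 11, while checking all 2^n subsets finds a total value of 13: the guess is far faster but can miss the best solution, which is exactly the trade-off heuristics accept.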
Applications of AI
Limitations and Advantages of Computers
Before discussing the applications of AI and the use of computers in these
applications, this section summarizes the advantages and limitations of using
computers.
Advantages:
1- Computers can calculate problems that would take humans years, and do them in seconds. For example, in the Deep Blue vs. Kasparov match, Deep Blue could calculate 200 million chess positions a second, while a man can examine only about 2 a second.
2- Giant long-term memory. For example, in the Deep Blue vs. Kasparov match, Deep Blue remembered every single move that Kasparov had made.
3- Computers do not fatigue. For example, fingerprint identification and credit card fraud checking have both been enhanced with AI because the machines never get tired of what they are doing.
Limitations:
1- Most computers do not learn from experience. If a computer were learning to walk, it would not try out different muscles the way a baby would; it would follow a specific set of instructions.
2- Computers cannot make quick decisions based on experience.
3- Computers cannot make connections. For example, if someone searches Google for "George Washington", they might get information about George Washington Baked Beans, Washington Ave., and the first president, even though someone looking up George Washington is probably searching for information about the first president. This may annoy some searchers.
4- Computers cannot understand human language. If I said, "I hate pepper", this could mean many things. It could be a response to "I hate horses", or someone might be offering me pepper, or someone might say, "Joe Pepper is coming to the movies with us," and in response I might say, "I hate pepper" - but a computer would never understand the difference.
Neural Networks
A Neural Network is a program designed to simulate the human brain and its neurons. People noticed similarities between computers and the human brain long ago, but since then, some big differences have been found. For example, computers do not learn by trial and error; instead they follow a specific set of instructions that tell them what to do.

Neural networks try to learn the way a baby learns to walk. Instead of following a set of instructions, the baby moves one muscle at a time. Sometimes the baby succeeds, and sometimes it fails. Each move it makes is directed by the brain and accompanied by a connection of neurons in the brain. If the baby fails to walk, the connection of neurons that produced the failing movement is shut down; if the baby succeeds, the neural connection is kept open.
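This strengthen-or-weaken idea can be sketched with a perceptron, a single artificial neuron. The toy Python example below learns the logical AND function; it is only an illustration of weight adjustment by trial and error, not a model of how a real brain learns:

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """A single artificial neuron. After every mistake its weights are
    nudged, loosely like strengthening or weakening a neural connection."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - out          # 0 if correct, +1/-1 if wrong
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Teach the neuron the logical AND function by trial and error
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
for (x1, x2), target in examples:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

After a handful of passes over the examples, the neuron stops making mistakes: the connections that caused wrong answers have been weakened, and the ones that caused right answers have been kept.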
Computer learning skills developed for neural networks are used today in a class of computer programs called "expert systems". These systems use information from human experts such as doctors, lawyers, etc. to make the kinds of decisions that the experts would make themselves. Expert systems also learn from experience and get better at their job the longer they do it.

Fingerprint identification is one of the many jobs made easier by expert systems. The police need to compare fingerprints found at a crime scene to thousands of other fingerprints across the country. Expert systems help fingerprint identification because the job needs the ability to recognize patterns, a high degree of judgment, and the inhuman ability to never tire of working.
Another problem made easier by expert systems is the detection of credit card fraud, which also requires a high degree of judgment and the ability to search through loads of data. The credit card fraud identification expert system uses artificial intelligence to scan all transactions made on the card, and then reports any suspicion of fraud to a manager. Neural networks can do a lot for the modern world.
Robotics
Two hundred years ago, people who did mathematical problems were called computers. Today, computers are complex machines containing electrical circuits that store loads of information in code. The computer is also used to control the most complex machine ever invented by man - the robot. Robots can work in any condition, without getting tired, and can do the work faster than humans.

The word "robot" was first used by Karel Capek in his play "R.U.R. (Rossum's Universal Robots)", a play about robots that overthrow their masters. The word originated from the Czech word "robota", meaning forced labour. When it was first used, it had no exact definition.
Virtual Reality is a new invention that uses our modern technology in a different way. By linking our sight, hearing, and touch to the computer with sensors, Virtual Reality may be a giant breakthrough for AI. Scientists believe that in the future, surgeons in one country will be able to do surgery on a patient in another country.
Robots come in all shapes and sizes, but the most common is the mechanical
arm. This is also one of the simplest robots around us today. Scientists describe
the ways that robots can move as their Degrees of Freedom. Robots with one
hinge joint have one degree of freedom, while industrial robots that can move at
the waist, arms, elbows, etc., can have six degrees of freedom. Another type of
robot is a face bot. A face bot is a robot that is shaped like a human face and can
show emotions on it. Kismet, a face bot, does not look completely human, but he
can move his ears, eyes, eyebrows, eyelids, and mouth to show different
emotions.
Robotic AI can be used in many ways. For example, college students realized that they could program Lego robots to play soccer and started an international soccer tournament called RoboCup, featuring a ball that sends out infrared signals. RoboCup now has a junior section for elementary school, middle school, and high school students. Each year, they meet to see who has the best robot. The activities at RoboCup Jr. include an Aibo (programmable robotic dog) soccer league, dancing, and a robot rescue game, which simulates a real robotic rescue. Another use of robotic AI is in factory work. In factories, robots weld, smash, assemble, and load materials. This needs the ability not to fatigue and to work in any condition.
Another type of robot is a chatterbot. Chatterbots are online robots that interact with humans and can engage in a conversation. Chatterbots are not as complex as they seem; they hide their faults by redirecting the conversation back to you. For example, in 1966, Joseph Weizenbaum made Eliza, a relatively simple program that turned people's phrases around. If you said "How are you doing?", instead of answering "Good" or "Bad" it would redirect the sentence and say, "Why are you so interested in how I am doing?" When Eliza first came out, many people formed strong emotional bonds with her, and some psychiatrists even asked Weizenbaum whether they could refer human patients to her. Eliza redirects so that she does not have to answer questions that might trip her up. By asking you personal questions, she makes the "patient" think about himself instead of noticing her mistakes. Online chat rooms and instant messaging also make it easier for people to accept Eliza.
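Eliza's redirection trick can be sketched in a few lines of Python. The patterns and canned replies below are hypothetical, not Weizenbaum's actual script, but they show the idea: match a phrase, swap the pronouns, and turn the speaker's words back on them.

```python
import re

# Pronoun swaps so the user's own words can be turned back on them
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I"}

# (pattern, reply template) pairs -- invented for illustration
RULES = [
    (r"i hate (.*)", "Why do you hate {0}?"),
    (r"how are you.*", "Why are you so interested in how I am doing?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please tell me more."),          # fallback: just redirect
]

def reflect(phrase):
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower().strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("How are you doing?"))   # Why are you so interested in how I am doing?
print(respond("I hate pepper"))        # Why do you hate pepper?
```

Notice that the program never answers anything; every rule either bounces the question back or asks for more, which is exactly how Eliza hides her lack of understanding.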
Honda of Japan leads the world in humanoid robots with Asimo, the most advanced humanoid robot in the world. A humanoid robot is a robot that tries to simulate a human: it has two legs, two arms, and a head. These robots are the hardest to program because, unlike humans, they do not have natural balancing systems. Asimo is 51'' tall, 18'' wide, and weighs 115 pounds. He is constructed from magnesium alloy coated in plastic, which makes him lightweight and durable. Asimo has three indicator lights:
1. White- Ready for operation
2. Red- Ready to walk
3. Green- Low-level power on
Asimo is powered by a 51.8V lithium-ion battery that lasts for an hour on a single charge. The battery accounts for about thirteen pounds of Asimo's weight and is stored in his backpack. He was built with 34 degrees of freedom and opposable thumbs. With visual sensors on his head and kinesthetic (force) sensors on his wrists, Asimo can synchronize with human movement. Asimo runs at about 6 km/h with a stride about 1.7' long. Asimo's intelligence abilities are:
1. Charting a course - the ability to plan a path around obstacles
2. Recognizing moving objects
3. Distinguishing sounds
4. Recognizing faces and gestures
One of the limits to robots is that they cannot do anything or respond to
anything outside their program.
Games
Velena
What is Velena?
Velena is a Connect-4 computer game that uses AI. It is based on a thesis by L. Victor Allis. Velena uses a known mathematical approach that consists of eight rules. With these rules, Velena can win the game whenever she plays first, no matter how well her opponent plays. The program is a Shannon C-type program, which means it uses a knowledge-based approach and tries to simulate the way the human mind makes decisions.
Rules and Terms of Connect-4
Each game can be described as a sequence of moves: if we label the columns with the letters a through g and the rows 1 through 6, we can describe every move, and therefore every game, as a sequence of moves. For example, the game in Fig. 1 can be described as:

Move   O    X
1      d1   e1
2      e2   f1
3      f2   g1
4      g2   d2
5      f3   c1
6      e3   d3
7      f4   f5
8      g3   e4
9      g4   ++

where ++ symbolizes the end of the game (the inability to move).
Terminology
Odd Square: A square that is in an odd row
Even Square: A square that is in an even row
A Group: Four men connected, vertically, horizontally, or diagonally
A Threat: Three men of the same type (X or O) connected, and with the fourth
square that forms the group empty and the square below it empty
Odd Threat: A threat where the empty square that completes the group is an odd
square
Even Threat: A threat where the empty square is even
Double Threat: Two groups that share an empty odd square; each group contains only two men (of the same color), and the other two squares (one from each group) are empty, one directly above the other. The square below the shared square must also be empty.
Game Strategy
Before we construct a game strategy, note that we consider white to be O and black to be X. The first step in constructing a game strategy is noting that after white has moved, the number of empty squares left on the board is odd, and after black has moved, it is even. From this it was proven that if white has an odd threat and black cannot connect four men anywhere, white will eventually win; likewise, black wins if it has an even threat and white cannot connect four men anywhere. If white has an odd threat and black has an even threat in different columns, white will win. If they are in the same column, the lower threat wins.
Velena’s strategy differs depending on if she plays white or black. When she
is white, she uses her database to always get to an odd threat position, and then
win the game from there. When she is black, she follows the longest winning
route for white and tries to stop it.
Using brute force in Velena would take terabytes of space, so instead the program tries to predict the outcome of the game using mathematics.

When constructing a Connect-4 program, there are two strategies. The first tries to stop your opponent from winning while trying to connect four men at the same time. This strategy guarantees invulnerability in the short run, but tends to fail in the long run, because it cannot see past the first few moves of the game. The second strategy is to aim for a win in the long run. Most Connect-4 algorithms implement the first strategy with a variation of Alpha-Beta pruning, a type of search method.
Game Complexity
In a Connect-4 board, each slot has three states: occupied by white, occupied by black, or not occupied at all. Since there are 42 slots (7 columns x 6 rows), the game complexity is 3 (possible states of a1) x 3 (possible states of a2) and so on for every slot, which comes to 3^42, or approximately 1.1 x 10^20. But this is only an upper bound, since we are counting all the illegal positions as well. After subtracting the number of illegal positions, we get about 7.1 x 10^13, which is still a very large number. Although Connect-4 is not as trivial as Tic-Tac-Toe, its game complexity is not as large as that of chess, and many of the moves are repeated. For example, for white to win, the first seven moves are forced, so they repeat a lot.
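The upper bound is easy to check with a quick calculation (here in Python):

```python
# Each of the 42 slots (7 columns x 6 rows) can be empty, white, or black,
# giving an upper bound on the number of Connect-4 positions.
slots = 7 * 6
upper_bound = 3 ** slots
print(upper_bound)            # 109418989131512359209
print(f"{upper_bound:.1e}")   # 1.1e+20
```

The true number of legal positions is much smaller, because most of these board fillings could never arise in an actual game.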
One problem with calculating the game complexity of Connect-4 is checking whether a position is illegal. This can be very hard. For example, is the position in Fig. 3 illegal?

[Fig. 3: Is this position illegal?]

The answer is yes. Since white starts, the only possible first move they could have played is d1. If black played b1, d2, or f1, white would not have a move. So black can play a1, c1, e1, or g1. Let's say black played a1; the only move white has is a2. Similarly, the only black moves that white can respond to are c1, e1, and g1. If this cycle goes on, the farthest we will ever get is Fig. 4:

[Fig. 4: The closest we can get to Fig. 3]
Fig. 3 and Fig. 4 demonstrate the difficulty of detecting whether a position is legal. This factors into building a Connect-4 program's database and figuring out the game complexity.
Board sizes

When using Velena, you will notice that you cannot change the board size from the standard 7x6 board. This is because of a proven theorem that says that if white starts on any 2n x 6 board (a board with an even number of columns and six rows), black can get at least a tie by following these steps:
1. If white plays in column A, B, E, or F, play directly on top of them
2. If white plays in column C or D for the first time, play in the opposing column
3. If white plays in column C or D again, play directly on top of them
For proofs of the threat theorems, see Victor Allis' thesis, "A Knowledge-Based Approach of Connect-Four - The Game is Solved: White Wins".
Deep Blue
Deep Blue is a machine programmed by IBM to play chess. After six years of work on Deep Blue, the IBM team felt they were ready to challenge the world champion, Garry Kasparov. In Game 1 of the 1996 match, Deep Blue started off with a win. But Kasparov learned quickly: he won the match four to two and confidently proposed a rematch in 1997. Kasparov won the first game of the rematch with ease, but of the next game he said, "It played differently, more strongly, unlike a computer". The next three games between man and machine ended in draws. Then, finally, Deep Blue forced Kasparov into making a poor move, and Kasparov resigned.
Deep Blue used Brute Force, but its search looked well past the first few moves. It challenged Kasparov with 256 processors that could search about 200 million moves per second, analyzing the possible outcomes of the game. Grandmasters coached the programmers at IBM to deepen Deep Blue's "book", its library of knowledge about how to win. Kasparov could not use the Brute Force approach; instead, he learns what is important from experience and relies on the human mind's ability to recognize patterns.
The loss of Game 2 weighed on Kasparov's mind for the rest of the match. After game 3, he wanted to quit; he was fed up, and he had to be convinced to come to the table to play games 4 and 5. "There was no game 6, because I didn't want to play," he said.

Although Deep Blue was smart, it did not think the way humans do. That is still many years and breakthroughs away. After the match with Kasparov, IBM retired Deep Blue, and it never played again.
TD-Gammon (Backgammon)
Another use of AI in games is in backgammon. This was pioneered by the program BKG 9.8, a backgammon-playing program made at Carnegie Mellon University by Hans Berliner. In 1979, BKG 9.8 played a backgammon match against world champion Luigi Villa the day after he had won the world championship in Monte Carlo. The stakes of the match were $5,000. The program won with a final score of 7 to 1. Despite the score, Villa played better than BKG 9.8: he made almost all the right moves, while the backgammon program played only 65 of its 73 moves correctly.
Next, in the 1980s, Gerald Tesauro at IBM made a neural network program to play backgammon, which he called Neurogammon. This program had backgammon knowledge about how to play encoded in its memory. Neurogammon was also an expert system: after training on data sets of expert games, it could assign weights to its pieces of knowledge. The program was good enough to win the 1989 Computer Olympiad.

Tesauro's next program used temporal difference learning, which means that instead of learning from games played by experts, it learns from self-played games. The program was called TD-Gammon (Temporal Difference Gammon). The differences between TD-Gammon 0.0 and TD-Gammon 3.0 are a bigger neural net, more knowledge in the program, and smaller, more selective searches.
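The core of temporal difference learning can be sketched on a problem far smaller than backgammon. The toy Python example below uses a standard five-state random walk (not TD-Gammon's actual network): after every move, the value estimate of the old state is nudged toward the value of the new state, so the program learns entirely from its own play.

```python
import random

random.seed(0)
NUM_STATES = 5                      # states 0..4; each episode starts in the middle
values = [0.5] * NUM_STATES         # initial guess for each state's value
ALPHA = 0.1                         # learning rate

for episode in range(2000):
    state = NUM_STATES // 2
    while True:
        nxt = state + random.choice([-1, 1])
        if nxt < 0:                 # fell off the left edge: reward 0
            values[state] += ALPHA * (0 - values[state])
            break
        if nxt >= NUM_STATES:       # reached the right edge: reward 1
            values[state] += ALPHA * (1 - values[state])
            break
        # TD(0) update: move V(state) toward V(next state)
        values[state] += ALPHA * (values[nxt] - values[state])
        state = nxt

print([round(v, 2) for v in values])
```

After training, the estimates settle near the true probabilities of reaching the rewarding right edge (about 0.17, 0.33, 0.5, 0.67, 0.83), even though the program was never shown an expert's answer.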
TD-Gammon was one of the best backgammon players in the world. At the AAAI '98 conference (Association for the Advancement of Artificial Intelligence Conference, 1998), TD-Gammon played the then world champion, Malcolm Davis. To reduce the luck factor, the two players played 100 games over the span of three days. In the end, Malcolm Davis won by only eight points. The neural net of the backgammon-playing program has 300 input values and contains 160 hidden units, with approximately 50,000 trained weights. To get TD-Gammon to its level at the AAAI conference, about 1,500,000 games had to be played.
Game Theory
Game Theory is the branch of mathematics that deals with playing games.

In Game Theory, a winning position is one where you have a winning strategy: a set of moves with which you can win the game, no matter how well your opponent plays. A losing position is one where your opponent has the winning strategy.

Rule #1: From a winning position, you can always move to some losing position (for your opponent).
Rule #2: From a losing position, every position you can move to is a winning position (for your opponent).
Take the classic count-to-10 problem: David and Wesley are playing a game. David starts by saying one, two, or three consecutive numbers (starting from 1). Then Wesley says the next one, two, or three consecutive numbers, and the game goes on. The person who says 10 loses. Does David have a winning strategy?

Answer: In this game, nine is the "obvious" losing position, because if you "receive" the number nine, you must say 10 and have lost. Therefore eight, seven, and six are all winning positions, because from them you can get to nine. Five is losing, because from it you can only get to six, seven, or eight, which are all winning positions. So four, three, and two are all winning positions, and one is losing. Therefore David's winning strategy is to say however many consecutive numbers it takes to stop at 1 on his first turn, at 5 on his second, and at 9 on his third.
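This back-to-front reasoning can be written as a short recursive Python function that applies Rule #1 and Rule #2 directly:

```python
def is_losing(n):
    """True if the player who must speak next, after the count has
    reached n, loses with best play (the player who says 10 loses)."""
    if n == 9:
        return True          # forced to say 10
    # You may stop at n+1, n+2, or n+3; stopping at 10 means losing,
    # so only stops up to 9 are worth considering.  By Rule #1 you are
    # winning if some stop hands your opponent a losing position.
    return not any(is_losing(n + k) for k in range(1, 4) if n + k <= 9)

print([n for n in range(1, 10) if is_losing(n)])   # [1, 5, 9]
print(is_losing(0))   # False: David, who moves first, can win
```

The function rediscovers the strategy from the answer above: the losing positions are exactly 1, 5, and 9, so David wins by stopping on those numbers.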
Let's try a harder problem:

Ian and Larry are playing a game with 3 jars of marbles. On each player's turn, they must remove the same number of marbles from each of two different jars. If a player is unable to do so, they lose the game (and their opponent wins). If it is Ian's turn and the jars contain 2, 3, and 5 marbles respectively, which player has a winning strategy?

To make it easier: any position with zero marbles in one jar, or with two jars holding the same number of marbles, is a winning position. If there is one empty jar and two unequal jars, the player to move takes from both non-empty jars as many marbles as there are in the smaller one, leaving the opponent with two empty jars and no legal move - a loss for them. And if two jars hold the same number of marbles, the player to move empties both of them, again leaving the opponent with no legal move.
Answer: For this problem, we can use a game tree or a game table. Our game tree (Fig. 5) has its root equal to the current state, 2/3/5. Since our search space is not unimaginably large, the successor nodes can be all possible states reachable from this one. We can also represent this as a table.

One possible move is to take two marbles away from jars 1 and 2, giving 0/1/5. We could also take two away from jars 1 and 3, giving 0/3/3, or take one marble away from jars 1 and 2, giving 1/2/5. Since any position with an empty jar is a winning position, the first two options are both winning. Since we do not yet know whether the third option is winning or losing, we have to go further. From 1/2/5, we can only get to 0/1/5 (W), 0/2/4 (W), 1/1/4 (W), and 1/0/3 (W), which are all winning positions. Therefore, 1/2/5 is a losing position.

Here we do not need to continue: we have found that 1/2/5 is losing (by Rule #2), and therefore that 2/3/5 is winning (by Rule #1). You can see the complete tree and table in Fig. 5.
2/3/5 (W) -> 0/1/5 (W)
          -> 0/3/3 (W)
          -> 1/2/5 (L) -> 0/1/5 (W)
                       -> 0/2/4 (W)
                       -> 1/1/4 (W)
                       -> 1/0/3 (W)

Fig. 5
The tree and table. Notice that they are not complete: once we find out that 1/2/5 is losing, we know 2/3/5 is winning.
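The same tree search can be written as a short recursive Python program: following the rules above, a position is winning exactly when some move from it leads to a losing position.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def is_winning(jars):
    """True if the player about to move from this position can force a win.
    A move removes the same number of marbles from two different jars;
    a player left with no legal move loses."""
    jars = tuple(sorted(jars))
    moves = []
    for i in range(3):
        for j in range(i + 1, 3):
            for k in range(1, min(jars[i], jars[j]) + 1):
                nxt = list(jars)
                nxt[i] -= k
                nxt[j] -= k
                moves.append(tuple(sorted(nxt)))
    # Rules #1 and #2: winning iff some move reaches a losing position
    return any(not is_winning(m) for m in moves)

print(is_winning((2, 3, 5)))   # True:  the player to move (Ian) wins
print(is_winning((1, 2, 5)))   # False: every move leads to a winning position
```

The program explores the whole tree so we do not have to; it confirms that 1/2/5 is losing and therefore that 2/3/5, Ian's starting position, is winning.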
Here are some problems to do by yourself:

a) Alphonse and Beryl are playing a game, starting with a pack of 52 cards. Alphonse begins by discarding at least one but not more than half of the cards in the pack. He then passes the remaining cards in the pack to Beryl. Beryl continues the game by discarding at least one but not more than half of the remaining cards in the pack. The game continues in this way with the pack being passed back and forth between the two players. The loser is the player who, at the beginning of his or her turn, receives only one card. Show, with justification, that there is always a winning strategy for Beryl.
b) (Hypatia '03) Xavier and Yolanda are playing a game starting with some coins arranged in piles. Xavier always goes first, and the two players take turns removing one or more coins from any one pile. The player who takes the last coin wins. If there are two piles of coins with 3 coins in each pile, show that Yolanda can guarantee that she always wins the game.
c)Alphonse and Beryl play a game by alternately moving a disk on a circular
board. The game starts with the disk already on the board as shown. A player
may move either clockwise one position or one position toward the centre but
cannot move to a position that has been previously occupied. The last person
who is able to move wins the game.
(1) If Alphonse moves first, is there a strategy which guarantees that he will
always win?
(2) Is there a winning strategy for either of the players if the board is changed
to five concentric circles with nine regions in each ring and Alphonse
moves first? (The rules for playing this new game remain the same.)
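Problem (b) can also be checked by brute force using the same winning/losing rules discussed earlier. The sketch below is written in Python for this note (it is not part of the original problems): a move removes one or more coins from a single pile, and the player who takes the last coin wins.

```python
from functools import lru_cache

# Brute-force check of problem (b): a move removes one or more coins
# from a single pile; the player who takes the last coin wins, so a
# player facing no coins at all has already lost.

@lru_cache(maxsize=None)
def nim_win(piles):
    """True if the player to move can force a win."""
    if sum(piles) == 0:
        return False                          # the last coin is already gone
    for i, n in enumerate(piles):
        for take in range(1, n + 1):
            rest = piles[:i] + (n - take,) + piles[i + 1:]
            if not nim_win(tuple(sorted(rest))):
                return True                   # leave the opponent a losing position
    return False

# Xavier moves first from two piles of 3; the position is losing for
# the player to move, so Yolanda can always win. Her strategy is to
# mirror Xavier's move in the other pile.
print(nim_win((3, 3)))  # False
```

The mirroring idea is visible in the output: any position with two equal piles is losing for the player to move, because whatever they do in one pile can be copied in the other.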
AI over time
Time Line
In the following section we describe the timeline depicted in Figure 6.
•Analytical Engine: A machine designed by Charles Babbage in England. According
to Babbage, “sixty additions or subtractions may be completed and
printed in one minute. One multiplication of two numbers, each of fifty figures,
in one minute. One division of a number having 100 places of figures by another
of 50 can be printed in one minute.” Although Babbage spent about 40 years on
the machine, he was never able to finish it, because the technology of the 19th
century was not advanced enough.
•First Computer Program: The Countess of Lovelace realized that Babbage’s
Analytical Engine could be given instructions through holes punched in cards.
•Binary Logic: Also called Boolean logic. Boolean logic describes the function of
a logic gate (logic gates process signals that are either true or false, which can
also be written as 1 and 0), and this can be shown in a truth table. The basic
gates follow these rules:
NOT gate: the output Q is true when the input A is NOT true (false)
AND gate: the output Q is true only if both A and B are true
NAND gate (NOT AND): the output Q is true unless both A and B are true
OR gate: the output Q is true if A OR B (or both) is true
NOR gate (NOT OR): the output Q is true only if neither A nor B is true
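These gate rules can be checked with a few lines of code. The sketch below, written in Python for illustration, builds each two-input gate and prints its truth table:

```python
# Print the truth table for each two-input gate described above,
# so the rules can be checked at a glance.

gates = {
    "AND":  lambda a, b: a and b,
    "NAND": lambda a, b: not (a and b),
    "OR":   lambda a, b: a or b,
    "NOR":  lambda a, b: not (a or b),
}

for name, gate in gates.items():
    print(name)
    for a in (False, True):
        for b in (False, True):
            # 1 means true, 0 means false, as in the text
            print(f"  A={int(a)} B={int(b)} -> Q={int(gate(a, b))}")

# The one-input NOT gate is simply: Q = not A
```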
•The Automatic Totalizator: This was invented for tallying up bets at horse
races. The first Automatic Totalizator was so big that it needed its own building!
•The Colossus Computer: This was built by the British for cracking German
codes during World War 2. (Colossus worked on the Lorenz cipher; the more
famous Enigma cipher was broken with important early help from Polish
code-breakers.)
•The Commercial Magnetic Memory Computer: A computer that used magnetic
memory.
•Commercial Robotics: Unimation started making commercial robots.
•Personal Computer: The first PC.
•CD-ROM: The first CD-ROM.
•Lightweight laptop: This laptop weighed less than two pounds!
Predictions
First of all, I would like to clarify that these are all my predictions, and are
based on what many other scientists think. These are not all proven to be correct
by any means
•Security recognition: The ability of a machine to recognize faces. This is one of
humanity’s current strengths over AI.
•Household Chores: AI operated robots will be able to wash dishes, control air
conditioning, do the laundry, etc.
•Nano-Technology: Tiny AI-controlled robots will be used to travel in the
human bloodstream and keep things clean. These robots are no bigger than
the width of a human hair.
•AI-Controlled Space Ship: By the end of these 35 years, planes and space ships
will be able to navigate in space autonomously.
•Virtual Surgery: Surgeons will use virtual reality (see Robotics) to perform
surgeries in other countries.
•AI surpasses human intelligence: Most scientists think this will happen sooner
than this prediction suggests. But Dr. Rodney Brooks compared what we know about
AI now to what people knew about the solar system 500 years ago. Back then,
they knew that the planets moved, but they did not know why. Likewise, we still
do not know some basic things about AI.
Predicted Future for AI
So far, our predictions for AI have not been very accurate. For example,
scientists predicted that the world chess champion would be beaten in a match
against a machine in the year 1968, while the first time that happened was
actually in 1997, about 30 years off. However, AI researchers are still optimistic
about its future. For example, there is a prediction that by 2050, everything will
use AI in some way, although some researchers argue that this is already in
place.
Fuel injection systems for cars use learning algorithms. Jet turbines use
genetic algorithms. Other examples include email, cellphones, X-ray reading systems,
and systems that book airplane flights. According to Dr. Rodney Brooks, the
director of the Massachusetts Institute of Technology’s Artificial Intelligence lab,
our research position and knowledge of AI is about the same as the state of
Personal Computers in 1978. Ray Kurzweil, the author of two AI books, The Age
of Spiritual Machines and The Age of Intelligent Machines, says that popular
intelligent machines like HAL, Commander Data in Star Trek, and David in the
film AI are not very far away. Dr. Brooks believes that by 2030, we will have the
basic template of intelligence. Dr. Brooks also reminded us, “who thought that by
2001, you would have four computers in your kitchen?”, pointing to the computer
chips in fridges, coffee makers, stoves and radios.
Conclusion
In conclusion, Artificial Intelligence will surpass human intelligence. Although AI
has proven itself to be similar to the human brain in some ways, computers do not
think the way we do. Many problems have been found, and their solutions are still a
few years away. Nevertheless, AI has shown some strengths over humanity, and the
future of AI remains uncertain. In this report, we have discussed applications of
AI and their impact on our lives. AI games provide entertainment, introduce new
algorithms, and give a challenge not only to their human opponents but to other
Artificial Intelligence programmers too.
Bibliography/Works Cited
Books
Margulies, Philip. Artificial Intelligence. Michigan: Blackbirch Press, 2004
Graham, I., Gwynn-Jones, T., Lynch, A., Parker, S., & Wood, R. Science. Australia: Weldon Owen, 2001
Jefferis, David. Artificial Intelligence, Machine Evolution and Robotics. St. Catharineʼs, Canada: Crabtree, 1999
Hyland, Tony. How Robots Work. Minnesota: Smart Apple Media, 2007
Flanagan, David. Java in a Nutshell. Sebastopol: OʼReilly, 1996-97
Barr, Avron & Feigenbaum, Edward A. The Handbook of Artificial Intelligence, Volume One. Stanford: William Kaufmann, Inc., 1981
Smith, Raoul. Artificial Intelligence. New York: Facts on File, Inc., 1989
Crevier, Daniel. AI: The Tumultuous History of the Search for Artificial Intelligence. New York: BasicBooks, 1993
Levitin, Anany. Introduction to the Design & Analysis of Algorithms. City unknown: Pearson Addison-Wesley, 2007
Magazine/Newspaper Articles
Schaeffer, Jonathan. “A Gamut of Games.” AI Magazine, Vol. 22, No. 3 (Fall 2001): 29-46.
Anderson, Kevin (2001, September 21). Predicting AIʼs Future. BBC News. Retrieved from http://bit.ly/bvgLJR
Videos
Deep Blue Beat G. Kasparov in 1997. Eustake, Youtube, 2007. URL: http://bit.ly/bbY6b0
Game Over: Kasparov vs. Machine. argishtib, Youtube, 2007. URL: http://bit.ly/cZXE4I
Websites
Honda. http://asimo.honda.com/InsideAsimo.aspx. Honda Motor Co. Inc., 2010
University of Waterloo Faculty of Mathematics. http://www.cemc.uwaterloo.ca/events/mathcircles/2009-10/Senior_Feb3.pdf. CEMC, 2010
University of Waterloo Faculty of Mathematics. http://www.cemc.uwaterloo.ca/events/mathcircles/2009-10/Senior_Feb10.pdf. CEMC, 2010
University of Waterloo Faculty of Mathematics. http://www.cemc.uwaterloo.ca/events/mathcircles/2009-10/Senior_Feb17.pdf. CEMC, 2010
[1] Author Unknown. http://www.its.bldrdoc.gov/fs-1037/dir-003/_0371.htm. 1996
McCarthy, John. http://www-formal.stanford.edu/jmc/whatisai/node1.html. Stanford, 2007
Rudnik, John. http://www.math.ca/Competitions/COMC/. Canadian Mathematical Society, 2010
CEMC. http://bit.ly/af7gGI. University of Waterloo, 2010
Hewes, John. http://www.kpsec.freeuk.com/gates.htm#nand. The Electronics Club, 2010
Powerhouse Museum. http://bit.ly/b4alxY. The Australian Academy of Technological Sciences and Engineering
Author Unknown. http://didyouknow.org/ai/. Did You Know, 2010
Research Papers
Bertoletti, G. (1997). Connect-4 [Data file]. Retrieved from http://pd-resource.net/Connect-4/Velena/
Pictures
Paone, Joe. http://castercomm.files.wordpress.com/2009/10/binary.jpg. Wordpress, 2009
Allis, Victor. A Knowledge-Based Approach to Connect 4. http://www.connectfour.net/Files/connect4.pdf, 1998
Ogden, Sam. http://web.mit.edu/museum/img/exhibitions/Kismet_312.jpg. MIT, 2008
Honda. http://bit.ly/abfTcR. Honda Motor Co. Inc., 2010
Author Unknown. http://bit.ly/14kDB8. IBM, Year Unknown
Peiretti, Federico. http://bit.ly/97dxHz. TuttoLibri, 2007
Roberts, Eric S. The Art and Science of C. Addison-Wesley Publishing Company, 2005
CBS Interactive. http://news.cnet.com/2300-11386_3-6084282.html. cnet news, 2010