Looking back, significant artificial intelligence breakthroughs have been promised ‘in 10 years’ for the past 60 years. We believed that by 2020 we’d have flying cars, but no, all we have is a robot called Sophia with citizenship who wants to start a family. This isn’t to discourage development in A.I, but our expectations of where it would be by now don’t quite meet reality yet.

In 1936, Alan Turing published ‘On Computable Numbers, with an Application to the Entscheidungsproblem’ (Turing, 1936), now recognised as the foundation of computer science. Within the paper, Turing analysed what it meant for a human to follow a definite method or procedure to perform a task, and for this purpose he invented the idea of a ‘universal machine’ that could decode and perform any set of instructions. Two years later, Turing, with help from other mathematicians, developed a new machine, the ‘bombe’, used to crack Nazi ciphers in World War II. Turing also worked on other technical innovations during the war, including a system to encrypt and decrypt spoken telephone conversations. Although it was successfully demonstrated with a recorded speech by Winston Churchill, it was never used in action, but it gave Turing hands-on experience of working with electronics. After the war, Turing designed the ‘Automatic Computing Engine’ (Negnevitsky, 2010), an early computer design notable for storing programs in its memory. In 1950, Turing published a philosophical paper in which he asked ‘Can machines think?’ (Turing, 1950), along with the idea of an ‘imitation game’ for comparing human and machine outputs, now called the Turing Test. This paper remains his best-known contribution to the field of A.I. However, this was at a time when the first general-purpose computers had only just been built, so how could Turing already be questioning artificial intelligence?
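
To get a feel for what Turing’s ‘universal machine’ amounts to, here is a minimal sketch in Python of a Turing-style machine: a finite control that reads and writes symbols on a tape according to a transition table. The binary-increment task and its transition table are purely illustrative assumptions, not anything taken from Turing’s paper.

```python
# A minimal Turing-style machine: a finite control plus a tape of symbols.
# The transition table maps (state, symbol) -> (symbol to write, head move,
# next state). This illustrative machine increments a binary number.

def run_turing_machine(tape, transitions, state="start", blank="_"):
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    pos = 0
    while state != "halt":
        symbol = cells.get(pos, blank)
        write, move, state = transitions[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Binary increment: scan right to the end of the number, then add one
# with a carry moving back to the left.
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry is 0, carry on
    ("carry", "0"): ("1", "R", "halt"),    # absorb the carry
    ("carry", "_"): ("1", "R", "halt"),    # overflow: prepend a 1
}

print(run_turing_machine("1011", increment))   # prints "1100"
```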

Turing himself made predictions. In the same 1950 paper, he estimated that by the end of the century a computer with around a billion bits of storage would play the imitation game well enough that an average interrogator would have no more than a 70 per cent chance of making the right identification after five minutes of questioning. In other words, Turing expected machines to be passing a modest version of his test by 2000; as we will see, even that proved optimistic.

It was only in 1956 that John McCarthy, an American computer scientist, coined the term ‘artificial intelligence’. McCarthy defined A.I as ‘the science and engineering of making intelligent machines’ (Peart, 2018), and it became the topic of the Dartmouth Conference, the first conference to be devoted to the subject. This conference marked the beginning of A.I research. Top scientists debated how to tackle A.I; cognitive scientist Marvin Minsky dominated with his top-down approach: pre-programming a computer with the rules that govern human behaviour. Minsky and McCarthy then won substantial funding from the US government, which hoped that A.I might give it the upper hand in the Cold War.

Considered by many to be the first successful A.I programming language, LISP dates back to 1958. It was originally created as a practical mathematical notation for computer programs, but it quickly became the favoured language for artificial intelligence research. LISP also had critical influence far beyond A.I in the theory and design of programming languages, including all functional programming languages as well as object-oriented ones, Java being one that we still use today. Another well-known early development in A.I was the General Problem Solver (GPS), designed to tackle a wide array of problems that challenge human intelligence; more importantly, it solved these problems by simulating the way a human being would solve them, as the sketch below illustrates.
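
The strategy at the heart of GPS was means-ends analysis: compare the current state with the goal, pick an action whose effects reduce the difference, and recursively achieve that action’s preconditions first. Below is a minimal sketch of the idea in Python; the facts and operators are toy inventions for illustration, not Newell and Simon’s actual program.

```python
# A minimal sketch of means-ends analysis, the strategy behind GPS:
# pick an action whose effects reduce the difference between the current
# state and the goal, recursively achieving its preconditions first.
# States are sets of facts; the operators below are toy inventions.

OPERATORS = [
    # (name, preconditions, effects added)
    ("walk to shop", {"at home"},    {"at shop"}),
    ("buy flour",    {"at shop"},    {"have flour"}),
    ("bake bread",   {"have flour"}, {"have bread"}),
]

def solve(state, goal, plan=(), depth=0):
    """Return a list of operator names achieving `goal`, or None."""
    if goal <= state:                    # goal already satisfied
        return list(plan)
    if depth > 10:                       # guard against circular subgoals
        return None
    for name, pre, add in OPERATORS:
        if add & (goal - state):         # would this reduce the difference?
            sub = solve(state, pre, plan, depth + 1)   # get preconditions
            if sub is None:
                continue
            result = solve(state | pre | add, goal,
                           tuple(sub) + (name,), depth + 1)
            if result is not None:
                return result
    return None

print(solve({"at home"}, {"have bread"}))
# -> ['walk to shop', 'buy flour', 'bake bread']
```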

Come 1969, A.I was lagging far behind the predictions made by its advocates, even though the first general-purpose mobile robot, named Shakey, was able to make decisions about its own actions by reasoning about its surroundings. Shakey was clever, building a spatial map of what it saw before moving, but it was also painfully slow: a moving object in its view could easily bewilder it, and it would sometimes stop for an hour while planning its next move.

By the early 1970s, millions had been spent on A.I, with little to show for it. A.I was in trouble: the Science Research Council of Britain, concerned that it was seeing little return on its funding and wanting to know whether it was advisable to continue, commissioned Professor Sir James Lighthill to review the state of affairs within the A.I field. Lighthill reported, ‘In no part of the field have the discoveries made so far produced the major impact that was promised.’ Which was true: back in the 50s, Turing had predicted that machines would be able to pass his test by 2000, and other A.I researchers had promised to build all-purpose intelligent machines on a human-scale knowledge base by the 80s. The 70s brought the big realisation that the problem domain for intelligent machines had to be sufficiently restricted, which is a development within itself, really.

Then came the 80s, and what did we get? An expert system: a big step for artificial intelligence. In A.I, an expert system is a computer system that emulates the decision-making of a human expert; simply put, it is software that attempts to act like a human expert in a particular subject area. The first successful commercial expert system began operation at the Digital Equipment Corporation, helping configure orders for new computer systems; by 1986 it was saving the company an estimated $40 million a year.
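
Under the hood, classic expert systems of this era were rule-based: a knowledge base of if-then rules plus an inference engine that chains them together. Here is a minimal forward-chaining sketch in Python; the configuration rules are invented for illustration and merely stand in for the thousands of rules reportedly held by real systems of this kind.

```python
# A minimal forward-chaining rule engine, the core mechanism of a classic
# expert system: keep applying if-then rules until no new facts appear.
# The order-configuration rules below are invented for illustration.

RULES = [
    # (conditions that must all be known, conclusion to add)
    ({"order includes database workload"},            "needs large disk"),
    ({"needs large disk"},                            "add second disk drive"),
    ({"order includes 8 terminals"},                  "needs multiplexer"),
    ({"needs multiplexer", "add second disk drive"},  "use expanded cabinet"),
]

def forward_chain(facts):
    """Derive every conclusion reachable from the starting facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

config = forward_chain({"order includes database workload",
                        "order includes 8 terminals"})
print(sorted(config))
```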

At the beginning of the 90s, roboticist Rodney Brooks published a paper: ‘Elephants Don’t Play Chess’. Brooks argued that the top-down approach was wrong and that the bottom-up approach was more effective. The bottom-up strategy, also known as behaviour-based robotics, is a style of robotics in which robots are programmed with many independent behaviours that are coupled together to produce coordinated action, as the sketch below illustrates. This paper helped drive a revival of the bottom-up approach; however, that doesn’t mean supporters of top-down A.I weren’t going to succeed too. In 1997, IBM’s chess computer, Deep Blue, shocked the world of chess and many in computer science by defeating Garry Kasparov in a six-game match. Capable of evaluating an average of 200,000,000 positions per second, it fed the belief that chess could serve as the ultimate test for machine intelligence; as Martin Ford put it, ‘computers are machines that can, in a very limited and specialised sense, think’ (Ford, 2017). Although this was a revolutionary moment for A.I, it did trigger alarmist fears of an era when machines would take over, excel at human mental processes and render us redundant.
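
To make the behaviour-based idea concrete, here is a minimal subsumption-style controller in Python: each behaviour independently proposes an action, and a fixed priority ordering lets the most urgent behaviour win. The sensor readings and actions are invented for illustration; they are not taken from Brooks’s robots.

```python
# A minimal sketch of a behaviour-based (subsumption-style) controller:
# each behaviour independently proposes an action, and a fixed priority
# ordering lets urgent behaviours suppress the rest.

def avoid_obstacle(sensors):
    if sensors["obstacle_ahead"]:
        return "turn left"

def seek_light(sensors):
    if sensors["light_direction"] is not None:
        return f"steer {sensors['light_direction']}"

def wander(sensors):
    return "drive forward"          # default behaviour, always fires

# Highest priority first: safety overrides goal-seeking overrides wandering.
BEHAVIOURS = [avoid_obstacle, seek_light, wander]

def control_step(sensors):
    for behaviour in BEHAVIOURS:
        action = behaviour(sensors)
        if action is not None:      # first behaviour with an opinion wins
            return action

print(control_step({"obstacle_ahead": False, "light_direction": "right"}))
# -> "steer right"
print(control_step({"obstacle_ahead": True, "light_direction": "right"}))
# -> "turn left"
```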

Rodney Brooks’s company, iRobot, created the first commercially successful home robot in 2002: an autonomous vacuum cleaner, the Roomba. Selling around 1 million units annually and still around today, the Roomba combines a powerful cleaning system with intelligent sensors to move seamlessly through homes, adapting to its surroundings to thoroughly vacuum your floors.