10 Best Milestones of AI

Artificial Intelligence (AI) is the hottest buzzword in technology right now, and it’s the driving force behind most recent technological advances.

Indeed, with all of the frenzied hoopla around AI today, it’s easy to forget that the technology isn’t all that new. Over the past century it has moved from the realm of science fiction into the real world, and the theory and fundamental computer science that make it possible have been in place for decades.

Scientists and engineers have known since the early days of computing that the ultimate goal is to create machines capable of thinking and learning in the same way the human brain, the most powerful decision-making system we know of, does.

Deep learning with artificial neural networks is the present state of the art, but numerous milestones along the way made it feasible. Let’s look at some of the top milestones of AI.

Origin of Artificial Intelligence

The phrase “artificial intelligence” was coined by professor John McCarthy in connection with a summer workshop at Dartmouth College. Experts in the fields of machine learning and neural networks gathered for the event to generate new ideas and debate how to approach AI.

The discussion aimed to generate fresh ideas and lay out a framework for beginning research and development on “thinking” machines.

Part of the motivation came from a variety of fields that remain central to today’s cutting-edge AI, such as natural language processing, computer vision, and neural networks.

AI Before the Term was Coined — Early 20th Century

The ideas that would eventually lead to AI took shape in the minds of science fiction writers and scientists in the early twentieth century.

Metropolis, a science fiction film featuring an artificially intelligent robot, premiered in 1927, and I, Robot, a visionary collection of short stories by Isaac Asimov, was published in 1950. Asimov envisioned the Three Laws of Robotics, as well as a computer that could answer questions by storing human knowledge.

In their 1943 publication “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Warren McCulloch and Walter Pitts proposed that logical functions could be performed by networks of artificial neurons, which are now known as artificial neural networks (ANNs).

In the 1940s, Alan Turing, who would go on to devise the Turing test, a way for mankind to judge when machines had acquired intelligence, and neurologist William Grey Walter, creator of the first autonomous robots, known as “tortoises,” were both at work on the question of intelligent machines.

1943 – The first ANN

Warren McCulloch and Walter Pitts’ A Logical Calculus of the Ideas Immanent in Nervous Activity was published in 1943. In it, the two argued that networks of artificial neurons could perform logical functions.

Modern artificial neural networks (ANNs) still rest on the basic idea McCulloch and Pitts described in the 1940s: simple neuron-like units wired into networks that together compute complex functions. Because modern AI relies on ANNs to enable ‘learning,’ this paper stands as one of the first key milestones in artificial intelligence.
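
To make the idea concrete, here is a minimal sketch (in Python, purely for illustration) of a McCulloch-Pitts-style threshold unit: a binary neuron that fires when the weighted sum of its inputs reaches a threshold, which is enough to realise logic gates such as AND and OR. The weights and thresholds below are illustrative choices, not values from the original paper.

```python
# A minimal sketch of a McCulloch-Pitts threshold neuron.
# Inputs and outputs are binary; the unit fires (outputs 1) when the
# weighted sum of its inputs reaches the threshold.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Logical AND: both inputs must be on before the unit fires.
for a in (0, 1):
    for b in (0, 1):
        print(a, "AND", b, "=", mcculloch_pitts_neuron([a, b], [1, 1], threshold=2))

# Logical OR: a single active input is enough.
for a in (0, 1):
    for b in (0, 1):
        print(a, "OR", b, "=", mcculloch_pitts_neuron([a, b], [1, 1], threshold=1))
```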

1948 – First Autonomous Robots

Elmer and Elsie, built by William Grey Walter in 1948, were another great AI milestone. These two robots were the first to operate on their own, without the assistance of a human, using light and touch sensors to navigate around obstacles.

1955 – Official term and academic recognition

The coining of the term artificial intelligence is perhaps one of the most significant milestones in the field. In 1955, computer scientist and AI “founding father” John McCarthy introduced the phrase “artificial intelligence.”

His next project was the Dartmouth College workshop, which took place in the summer of 1956. This conference was the first to recognize AI as a legitimate academic field of study.

1956 – Dartmouth Conference

At the 1956 workshop, organized by John McCarthy, experts in machine learning and neural networks came together to exchange ideas and debate how to approach AI. The agenda of that summer event included neural networks, computer vision, natural language processing, and more.

1964 – The First Chatbot: ELIZA

Long before Alexa and Siri were even a glimmer in their creators’ eyes, there was ELIZA, the world’s first chatbot. ELIZA was developed by Joseph Weizenbaum at MIT as an early application of natural language processing, the effort to get computers to converse with us in human language rather than requiring us to program them in computer code or interact through a user interface.

ELIZA couldn’t speak the way Alexa does, so she relied on text to communicate, and she couldn’t learn from her conversations with people. Even so, she paved the way for later efforts to break down the communication barrier between people and machines.
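
Under the hood, ELIZA worked by spotting keyword patterns in the user’s input and echoing fragments of it back inside canned templates. The Python snippet below is a toy, ELIZA-style exchange; the patterns and responses are invented for illustration and are not Weizenbaum’s original script.

```python
import re

# A toy, ELIZA-style exchange: spot a keyword pattern in the user's text
# and echo part of it back inside a canned template. Rules here are
# invented for illustration, not Weizenbaum's original script.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please, go on."  # default reflection when nothing matches

print(respond("I am feeling tired today"))  # How long have you been feeling tired today?
print(respond("My brother never listens"))  # Tell me more about your brother never listens.
```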

1969 – Backpropagation

The idea of backpropagation first appeared in 1969.

Backpropagation is now a central piece of AI. It enables a neural network to learn from its errors: the error at the output is propagated backwards through the network, and each connection weight is adjusted to reduce it. While this may not sound exciting, it means a network can be trained to improve its performance over time and grow increasingly adept at making predictions.

Backpropagation is thus another of artificial intelligence’s watershed moments. The concept was born in 1969 and gained popularity in 1986.
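
As a rough illustration of the mechanism, here is a minimal Python sketch of backpropagation on a single sigmoid neuron learning the logical OR function; real networks apply the same chain-rule update across many layers of weights. The learning rate and number of epochs are arbitrary choices for the example.

```python
import math
import random

# A minimal sketch of backpropagation for a single sigmoid neuron
# learning the logical OR function. The weight updates follow the chain
# rule: output error -> sigmoid derivative -> per-weight gradient.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
lr = 0.5

for epoch in range(5000):
    for inputs, target in data:
        # forward pass
        z = sum(w * x for w, x in zip(weights, inputs)) + bias
        out = sigmoid(z)
        # backward pass: gradient of squared error w.r.t. each weight
        delta = (out - target) * out * (1 - out)
        weights = [w - lr * delta * x for w, x in zip(weights, inputs)]
        bias -= lr * delta

for inputs, target in data:
    out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, "->", round(out, 2), "(target", target, ")")
```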

1970 – First ‘intelligent’ robot

Though autonomous robots had existed for a couple of decades by 1970, it wasn’t until the creation of ‘Shakey’ that a robot could reason about its own actions.

Unlike its predecessors, Shakey the Robot did not need to be instructed on each individual step of a complex task. Instead, it could break a command down and plan how to carry it out. That makes Shakey a significant AI milestone: the first physical robot controlled by AI reasoning.

1978 – Voice-activated technology

If you were a kid in the late 1970s or early 1980s, you were probably familiar with the Speak & Spell. This educational toy could “talk” to children, teaching them both the correct spelling and pronunciation of words.

It was also the first time the human vocal tract was reproduced electronically on a single silicon chip. In doing so, it laid the groundwork for something that has only recently become widespread: voice-driven technology.

1980 – The Formative Stages (XCON)

By the late 1960s, the allure of AI had worn off a little: millions of dollars had been invested, yet the field was still falling short of expectations. By the 1980s, however, the “AI winter” had passed. From 1980 through 1986, Digital Equipment Corporation’s XCON expert system was credited with saving the company $40 million per year.

This was a watershed moment for AI because it demonstrated that the technology wasn’t just a cool achievement but had practical business value. These applications were narrowly focused, using AI to solve one specific problem, and businesses began to recognize the impact AI could have on their operations and the potential for cost savings. In fact, by 1985 firms were investing $1 billion per year in AI systems as a result.

1981 – Commercialised AI

‘Expert systems’ rose and fell in popularity in the early 1980s. As the name implies, an expert system is a computer program that can make decisions in a specific domain the way a human expert would.

Expert systems promised enterprises complex problem-solving and were widely commercialized; during their heyday, two-thirds of Fortune 500 corporations used them. However, because they didn’t quite live up to the hype, their time in the spotlight was limited.
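
At their core, expert systems encoded a specialist’s knowledge as if-then rules and chained those rules together to reach conclusions. The Python sketch below shows a minimal forward-chaining loop; the troubleshooting rules and facts are invented purely for illustration.

```python
# A minimal sketch of the rule-based reasoning behind expert systems:
# forward chaining over simple if-then rules. The rules and facts here
# are invented for illustration.
rules = [
    ({"engine_cranks", "no_spark"}, "faulty_ignition"),
    ({"faulty_ignition"}, "replace_ignition_coil"),
    ({"engine_cranks", "no_fuel"}, "check_fuel_pump"),
]

facts = {"engine_cranks", "no_spark"}

# Keep applying rules until no new conclusions can be derived.
changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # includes the derived diagnosis and recommendation
```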

1988 – A Statistical Approach to Machine Translation

In 1988, IBM researchers published A Statistical Approach to Language Translation, introducing probability into the previously rule-driven field of machine translation. The work tackled translation between French and English.

This signaled a shift in emphasis from hand-coding rules into programs to building programs that estimate the probability of various outcomes from the data they are trained on. It is frequently regarded as a significant step towards simulating the cognitive processes of the human brain, and it is the foundation of machine learning as we know it today.
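
To give a flavour of the statistical approach, the toy Python snippet below estimates P(English word | French word) from counts over a handful of aligned word pairs and picks the most probable translation. The miniature “corpus” is invented; the real IBM systems estimated far richer translation and language models from millions of sentence pairs.

```python
from collections import Counter, defaultdict

# A toy illustration of the statistical idea: instead of hand-written
# translation rules, estimate P(English word | French word) from counts
# in aligned example pairs. The "corpus" below is invented.
aligned_pairs = [
    ("maison", "house"), ("maison", "house"), ("maison", "home"),
    ("chat", "cat"), ("chat", "cat"), ("chien", "dog"),
]

counts = defaultdict(Counter)
for fr, en in aligned_pairs:
    counts[fr][en] += 1

def translate(fr_word):
    # Pick the English word with the highest estimated probability.
    candidates = counts[fr_word]
    total = sum(candidates.values())
    best, freq = candidates.most_common(1)[0]
    return best, freq / total

print(translate("maison"))  # ('house', ~0.67)
```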

1998 – Furby and machine learning

Furby was released just in time for Christmas in 1998, and more than 40 million units were sold in the first three years. But what makes Furby a milestone in AI?

Furbies gave the appearance of intelligence, seeming to learn a language over time. They popularized the concept of machines that could learn and converse with humans; in other words, they put a fuzzy face on the idea of AI entering our homes.

2001 – A.I. Artificial Intelligence

A.I., directed by Steven Spielberg, was released in 2001. The film’s protagonist is David, a humanoid robot boy who can love and show feeling like a real child.

Sci-fi depictions of artificial intelligence were nothing new in the movies by 2001, but A.I. is unusual in that it frequently shifts to David’s perspective. David yearns for love and belonging more than anything else, and the film prompted sympathetic reflection on how artificial intelligence might fit into the world around us.

2005 – 5 Autonomous Vehicles Complete the DARPA Grand Challenge

When the DARPA Grand Challenge was first held in 2004, no autonomous car finished the off-road course through the Mojave Desert. In 2005, five vehicles made it! The competition helped drive the advancement of self-driving technology.

The World Wide Web’s Inception

The importance of this one can’t be overstated.

In 1991, CERN researcher Tim Berners-Lee introduced the hypertext transfer protocol (HTTP) and put the world’s first website online, making it possible to share links and data no matter who or where you were. Because data is the fuel of artificial intelligence, Berners-Lee’s work is unquestionably part of why AI has advanced to where it is today.

AI Defeats Global Chess Champion

Another watershed moment for AI came in 1997, when world chess champion Garry Kasparov was defeated by IBM’s Deep Blue supercomputer in a chess match.

By today’s standards, IBM’s chess supercomputer did not use techniques that would be considered real AI. Rather than observing games and learning from them, it relied on “brute force,” rapidly computing every conceivable option. Still, it mattered enormously from a public relations standpoint: it drew attention to the fact that computers were advancing quickly and becoming increasingly capable at tasks where humans had previously reigned supreme.

Deep Blue won by evaluating every potential line of play at high speed, analyzing some 200 million positions per second.
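
The core idea behind that kind of exhaustive search can be sketched with plain minimax. The Python toy below plays a Nim-like “last stick wins” game by evaluating every possible continuation; Deep Blue’s real search was vastly more sophisticated (alpha-beta pruning, hand-tuned evaluation, custom chess hardware), so treat this only as an illustration of brute-force game-tree search.

```python
# A toy illustration of brute-force game-tree search (plain minimax).
# Game: players alternately take 1-3 sticks; whoever takes the last
# stick wins. Every line of play is evaluated exhaustively.

def minimax(sticks, maximizing):
    if sticks == 0:
        # No sticks left: the previous player took the last one,
        # so the player to move has already lost.
        return -1 if maximizing else 1
    scores = []
    for take in (1, 2, 3):
        if take <= sticks:
            scores.append(minimax(sticks - take, not maximizing))
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    # Exhaustively evaluate every legal move and keep the best one.
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: minimax(sticks - take, maximizing=False))

print(best_move(7))  # taking 3 leaves the opponent in a losing position
```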

AI Wins Jeopardy!

Watson, an IBM AI, took on human Jeopardy! champions in 2011 and won the $1 million first prize. This was noteworthy because, while raw processing power had been pitted against humans before, as in the Kasparov chess match, Jeopardy! required Watson to compete in a language-based, creative-thinking game.

Unlike chess, which could be beaten through brute force, the idea of a computer beating humans at a language-based, creative-thinking game had seemed inconceivable.

2011 – Voice assistant

Siri, Apple’s voice-controlled virtual assistant, was also released in 2011. Siri is still regarded as one of the most popular examples of artificial intelligence.

For ordinary users, Siri’s use of speech recognition and natural language processing (NLP) was nothing short of revolutionary. It was also the first mainstream version of a technology that is now ubiquitous, with Alexa and Google Assistant in our homes and Siri in our pockets.

Showcasing Deep Learning

In 2012, AI learned to detect pictures of cats. This collaboration between Stanford and Google achieved unsupervised learning at scale, as reported in the paper Building High-Level Features Using Large Scale Unsupervised Learning by a team including Jeff Dean and Andrew Ng. Prior to this development, data generally had to be manually labeled before it could be used to train AI.

With unsupervised learning, a neural network can be set loose on raw data, as the cat experiment illustrated: the network analyzed 10 million unlabeled frames taken from YouTube videos and learned to recognize which images contained cats. This ability to learn from unlabeled data accelerated AI development and opened up a plethora of new possibilities for what machines might help us with in the future.
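
As a small-scale illustration of learning without labels, the Python sketch below runs k-means clustering on unlabeled 2-D points and discovers the two groups on its own. The Google/Stanford cat work used a very large autoencoder network rather than k-means, so this is meant only to convey the idea of unsupervised learning.

```python
import random

# A minimal sketch of unsupervised learning: k-means clustering groups
# unlabeled 2-D points without ever being told what the groups are.

def kmeans(points, k, iterations=20):
    centers = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center
            idx = min(range(k), key=lambda i: (p[0] - centers[i][0]) ** 2 +
                                              (p[1] - centers[i][1]) ** 2)
            clusters[idx].append(p)
        # move each center to the mean of its assigned points
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = (sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster))
    return centers

# Two obvious blobs of unlabeled points; k-means finds them on its own.
points = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)] + \
         [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(50)]
print(kmeans(points, k=2))
```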

2015 – Machines “See” Better Than Humans

In 2015, the annual ImageNet challenge demonstrated that machines could recognize objects across a library of 1,000 categories more accurately than humans. Image recognition had long been a difficult task for AI.

Since the competition began in 2010, the winning algorithm’s accuracy had risen from 71.8 percent to 97.3 percent, prompting researchers to declare that computers could recognize objects in visual data better than humans.

2016 – AlphaGo Defeats Go World Champion

In 2016, AlphaGo, a program created by DeepMind (a Google subsidiary), defeated world Go champion Lee Sedol four games to one in a five-game match. Although Go moves can be described mathematically, the sheer number of game permutations (there are roughly 130,000 ways to play just the first two moves in Go, versus about 400 in chess) makes a brute force approach impossible. To win, AlphaGo used neural networks to study the game and then kept learning as it played.

Autonomous Vehicles on the Road

Self-driving cars are a prominent use case for today’s AI, the application that has captured the public’s imagination more than any other. 2018 marked a watershed moment for autonomous vehicles, as Waymo’s self-driving taxi service went on the road in Phoenix, Arizona. And it wasn’t merely for the sake of testing.

Waymo One, the world’s first commercial autonomous ride-hailing service, is currently used by around 400 people who pay to be driven to their schools and offices within a 100-square-mile area.

While human safety operators still accompany each vehicle to monitor its performance and take control in an emergency, this is clearly a huge step towards a future in which self-driving cars are a reality for all of us.

2019 and Beyond

In the coming years, we can expect further research into and refinement of AI’s breakthrough technologies, from self-driving vehicles on land, at sea, and in the air to ever more capable chatbots. Thanks to AI’s growing skill in natural language generation and understanding, we can also expect to “talk” to even more algorithms than we do now, and we might even mistake them for humans.

In the wake of the COVID crisis, new AI applications will emerge to support contactless delivery, cleaning, and other tasks. And, of course, there may be applications we haven’t yet imagined.

Conclusion

From the Dartmouth workshop and the first neural networks to Deep Blue, AlphaGo, and self-driving taxis, these milestones trace how AI moved from science fiction into everyday life. Given the pace of progress so far, the next set of milestones may arrive sooner than we expect.
