A brief history of AI: how we got here and where we are going


With the current buzz around artificial intelligence (AI), it would be easy to assume that it is a recent innovation. In fact, AI has been around in one form or another for more than 70 years. To understand the current generation of AI tools and where they might lead, it is helpful to understand how we got here.

Each generation of AI tools can be seen as an improvement on those that went before, but none of the tools are headed toward consciousness.

The mathematician and computing pioneer Alan Turing published an article in 1950 with the opening sentence: "I propose to consider the question, 'Can machines think?'". He went on to propose something called the imitation game, now commonly called the Turing test, in which a machine is considered intelligent if it cannot be distinguished from a human in a blind conversation.

Five years later came the first published use of the phrase "artificial intelligence", in a proposal for the Dartmouth Summer Research Project on Artificial Intelligence.

From those early beginnings, a branch of AI that became known as expert systems was developed from the 1960s onward. Those systems were designed to capture human expertise in specialized domains. They used explicit representations of knowledge and are, therefore, an example of what's called symbolic AI.

There were many well-publicized early successes, including systems for identifying organic molecules, diagnosing blood infections, and prospecting for minerals. One of the most eye-catching examples was a system called R1 that, in 1982, was reportedly saving the Digital Equipment Corporation US$25m per annum by designing efficient configurations of its minicomputer systems.

The key benefit of expert systems was that a subject specialist without any coding expertise could, in principle, build and maintain the computer's knowledge base. A software component known as the inference engine then applied that knowledge to solve new problems within the subject domain, with a trail of evidence providing a form of explanation.

These were all the rage in the 1980s, with organizations clamoring to build their own expert systems, and they remain a useful part of AI today.

Enter machine learning

The human brain contains around 100 billion nerve cells, or neurons, interconnected by a dendritic (branching) structure. So, while expert systems aimed to model human knowledge, a separate field known as connectionism was also emerging that aimed to model the human brain in a more literal way. In 1943, two researchers, Warren McCulloch and Walter Pitts, had produced a mathematical model for neurons, whereby each one would produce a binary output depending on its inputs.
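The McCulloch-Pitts idea can be captured in a few lines of code: the neuron fires (outputs 1) only when the weighted sum of its binary inputs reaches a threshold. The weights and threshold below are illustrative choices, not values from the original 1943 paper.

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and a threshold of 2, the neuron behaves like a two-input AND gate:
print(mcculloch_pitts_neuron([1, 1], [1, 1], 2))  # 1 (fires)
print(mcculloch_pitts_neuron([1, 0], [1, 1], 2))  # 0 (does not fire)
```

By choosing different weights and thresholds, such units can mimic other logic gates, which is why McCulloch and Pitts saw networks of them as a basis for computation.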

One of the earliest computer implementations of connected neurons was developed by Bernard Widrow and Ted Hoff in 1960. Such developments were interesting, but they were of limited practical use until 1986, when a learning algorithm (now known as backpropagation) was developed for a software model called the multi-layered perceptron (MLP).

The MLP is an arrangement of typically three or four layers of simple simulated neurons, where each layer is fully interconnected with the next. The learning algorithm for the MLP was a breakthrough. It enabled the first practical tool that could learn from a set of examples (the training data) and then generalize so that it could classify previously unseen input data (the testing data).

It achieved this feat by attaching numerical weightings to the connections between neurons and adjusting them to get the best classification with the training data, before being deployed to classify previously unseen examples.
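The weight-adjustment process described above can be sketched in a toy example. The network below is a minimal MLP trained by gradient descent with backpropagation on the classic XOR problem; the architecture (four hidden neurons), learning rate, and iteration count are arbitrary choices for illustration, not from the original 1986 work.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# Weights on the connections between layers, plus per-neuron biases.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)          # hidden-layer activations
    return h, sigmoid(h @ W2 + b2)    # network output

_, out0 = forward(X)
initial_error = np.mean((out0 - y) ** 2)

for _ in range(5000):
    h, out = forward(X)
    # Backpropagate the error and nudge every weight in the right direction.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

_, out = forward(X)
final_error = np.mean((out - y) ** 2)
print(f"error before training: {initial_error:.3f}, after: {final_error:.3f}")
```

After training, the weights encode the pattern in the examples, so the same forward pass can then be applied to new inputs, which is the generalization step the article describes.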

The MLP could handle a wide range of practical applications, provided the data was presented in a format that it could use. A classic example was the recognition of handwritten characters, but only if the images were pre-processed to pick out the key features.

Newer AI models

Following the success of the MLP, numerous alternative forms of neural network began to emerge. An important one was the convolutional neural network (CNN) in 1998, which was similar to an MLP apart from its additional layers of neurons for identifying the key features of an image, thereby removing the need for pre-processing.

Both the MLP and the CNN were discriminative models, meaning that they could make a decision, typically classifying their inputs to produce an interpretation, diagnosis, prediction, or recommendation. Meanwhile, other neural network models were being developed that were generative, meaning that they could create something new, after being trained on large numbers of prior examples.

Generative neural networks could produce text, images, or music, as well as generate new sequences to assist in scientific discoveries.

Two models of generative neural network have stood out: generative adversarial networks (GANs) and transformer networks. GANs achieve good results because they are partly "adversarial", which can be thought of as a built-in critic that demands improved quality from the "generative" component.

Transformer networks have come to prominence through models such as GPT-4 (Generative Pre-trained Transformer 4) and its chat-based interface, ChatGPT. These large language models (LLMs) have been trained on enormous datasets drawn from the Internet. Human feedback improves their performance further still, through a technique known as reinforcement learning from human feedback.

As well as giving them an impressive generative capability, the vast training data means that such networks are no longer limited to specialized narrow domains like their predecessors; they are now generalized to cover almost any topic.

Where is AI going?

The capabilities of LLMs have led to dire predictions of AI taking over the world. Such scaremongering is unjustified, in my view. Although current models are evidently more powerful than their predecessors, the trajectory remains firmly toward greater capacity, reliability and accuracy, rather than toward any form of consciousness.

As Professor Michael Wooldridge remarked in his evidence to the UK Parliament's House of Lords in 2017, "the Hollywood dream of conscious machines is not imminent, and indeed I see no path taking us there". Seven years later, his assessment still holds true.

There are many positive and exciting potential applications for AI, but a look at the history shows that machine learning is not the only tool. Symbolic AI still has a role, as it allows known facts, understanding, and human perspectives to be incorporated.

A driverless car, for example, can be provided with the rules of the road rather than learning them by example. A medical diagnosis system can be checked against medical knowledge to provide verification and explanation of the outputs from a machine learning system.

Societal knowledge can be applied to filter out offensive or biased outputs. The future is bright, and it will involve the use of a range of AI techniques, including some that have been around for many years.

Provided by The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation: A brief history of AI: how we got here and where we are going (2024, July 1) retrieved 1 July 2024 from https://techxplore.com/news/2024-07-history-ai.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
