Artificial intelligence research has accelerated in recent years, and people are interacting with AI more frequently. In-home AI assistants and self-driving cars were once things out of science fiction but are now becoming a reality.
Some researchers and activists wonder if AI is nearing the point of sentience, the ability to think and feel on the same level as humans. Some worry that sentient AI could overtake humanity, while others are concerned about subjugating an intelligent life form to do our bidding.
So, how will we know whether an AI has become sentient? We’ll break down the history of AI, where AI research is now, and how, or even if, we can determine if AI has crossed the border into sentience.
- Artificial intelligence research began in the mid-1950s with the search for artificial general intelligence
- Today, a large part of AI research focuses on specific tasks rather than general intelligence
- Given our current understanding of consciousness, determining if an AI is sentient may be all but impossible
A Brief History Of Artificial Intelligence
Artificial intelligence has a long history, with early formulations in stories told millennia ago. Greek mythology speaks of Talos, a giant bronze statue that guarded the island of Crete and circled the island’s shores three times a day. Though the Greeks obviously wouldn’t have described Talos using the language we have to describe AI today, it’s fascinating how long humans have thought about the line between man and machine.
However, only recently has AI become something humans can study and develop. Many experts point to 1956, the year of the Dartmouth Summer Research Project on Artificial Intelligence, as the official start of AI research.
Over its eight-week span, approximately twenty attendees met and worked together to discuss and envision programs that could demonstrate learning capabilities. The Dartmouth Summer Research Project is often marked as an initial jumping-off point for modern developments in AI, even if the eight-week event acted more as a brainstorming session. Programs created in the subsequent years learned strategies for checkers, how to speak English, and how to solve word problems.
The U.S. Department of Defense began funding AI research heavily in the 1960s. Some researchers, such as Herbert A. Simon, claimed that within twenty years, AI could do any work a human could do. However, this prediction did not bear fruit, mainly due to the limitations of computer storage, and funding dwindled in the mid-1970s. Research funding returned in the 1980s but crashed again in the latter half of the decade.
The 1990s saw a second resurgence of research, this time focused on more specialized and focused AI designed to solve specific problems. This allowed researchers to demonstrate success more easily as their AIs achieved tangible results in the fields of economics and statistics.
The increasing speed of computers, combined with the Internet and access to big data, allowed for further advances in machine learning by the early 2010s. By 2015, Google was using AI in more than 2,700 projects.
The Current Landscape
Today, AI research looks quite different than it did in its early years. Early research often focused on artificial general intelligence: human-like AI capable of learning any task a human can. This is the type of AI most often featured in science fiction.
Instead, many of today’s AI researchers focus on producing artificial intelligence to accomplish specific tasks. For example, deep learning is a form of machine learning that relies on large quantities of data and can imitate how humans gain knowledge. People and businesses can use deep learning for purposes like speech or image recognition, recommendation systems, creating art, advertising, investing, fraud detection, and more.
Artificial general intelligence research is now frequently treated as a separate topic from AI designed for specific tasks.
Current AI Products
If you’ve been near a TV in the last few months, you’ve probably heard of OpenAI’s ChatGPT. This chatbot takes the questions you ask and gives you direct responses. It offers a more streamlined approach to searching for information online, delivering an answer right away rather than a list of websites that might give you conflicting information.
OpenAI has not yet developed ChatGPT enough to replace journalists and people who write for a living. However, the technology has enormous potential and will fundamentally alter many disparate fields.
You may have seen many people using another AI product last year called Lensa. Users could upload photos to the Lensa app and – for a fee – receive slightly touched-up, animated pictures of themselves to use as a profile picture for Instagram or Twitter. Though this is a pretty vain use for AI, it shows how ubiquitous it’s becoming.
There are also many companies using AI for much more practical purposes. Retailers can use AI to learn where their supply chain is weak or demand is low and adjust accordingly. Insurance companies can use AI to identify cases at risk of escalation and offer potential solutions to avoid further conflict. Customer service jobs may be replaced over time by AI bots.
Some automated investing platforms have started harnessing the power of AI to streamline investing for their users. With some apps, you can put money into a portfolio and have an AI move your investments around to maximize profits and defend against downturns. This is especially useful since keeping track of the news to decide what to invest in can be time-consuming.
The Limits Of Intelligence Testing And The Turing Test
One of the big problems with knowing when artificial general intelligence has gained sentience is that intelligence testing is incredibly limited.
In 1950, English mathematician, computer scientist, and philosopher Alan Turing proposed what became known as the Turing test, a rudimentary method of determining whether a machine is intelligent.
The test requires two humans and one AI. One human, the interviewer, holds a conversation with two subjects, one human and one AI. If the interviewer cannot determine which is which, meaning the AI consistently fools the interviewer into believing it is human, then the AI is deemed to have passed the test.
Most experts today agree that this test is ineffective at determining machine intelligence.
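As a rough illustration rather than a real evaluation, the imitation-game protocol described above can be sketched as a simple loop. The `judge`, `human_reply`, and `machine_reply` functions here are hypothetical placeholders, not part of any actual test:

```python
import random

def run_turing_test(judge, human_reply, machine_reply, questions, trials=100):
    """Repeated imitation-game rounds: for each question, the judge sees a
    reply from an unrevealed source and guesses "human" or "machine".
    Returns the fraction of machine rounds in which the judge was fooled."""
    machine_rounds = fooled = 0
    for _ in range(trials):
        question = random.choice(questions)
        is_machine = random.random() < 0.5  # coin-flip which subject answers
        reply = machine_reply(question) if is_machine else human_reply(question)
        guess = judge(question, reply)
        if is_machine:
            machine_rounds += 1
            if guess == "human":
                fooled += 1  # the machine passed as human this round
    return fooled / machine_rounds if machine_rounds else 0.0
```

In Turing’s framing, a machine whose fooled-fraction stays near 1.0 over many rounds is indistinguishable from a human to that particular judge, which is exactly why the test measures persuasiveness more than genuine thought.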
Another proposed method of gauging machine intelligence is the General Language Understanding Evaluation (GLUE) benchmark. GLUE is like the SAT for AI, asking programs to answer English-language questions based on datasets of varying sizes.
However, even the GLUE benchmark and similar tests have limits. Many would argue that animals like cats and dogs can think and feel, the basic requirements for sentience. However, how many pet dogs could manage to pass a multiple-choice exam?
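To make the benchmark idea concrete, suites like GLUE boil performance down to per-task scores and one headline average. This is a toy sketch of that kind of scoring; the tasks, answers, and use of plain accuracy everywhere are made up for illustration (the real GLUE uses different metrics for different tasks):

```python
def task_accuracy(predictions, gold_labels):
    """Fraction of questions answered with the correct label."""
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

def headline_score(per_task_scores):
    """GLUE-style headline number: the plain average of per-task scores."""
    return sum(per_task_scores.values()) / len(per_task_scores)

# Made-up results for two imaginary tasks
scores = {
    "sentiment": task_accuracy(["pos", "neg", "pos"], ["pos", "neg", "neg"]),
    "paraphrase": task_accuracy(["yes", "no"], ["yes", "yes"]),
}
print(headline_score(scores))
```

The headline number makes models easy to rank, but as the paragraph above notes, scoring well on multiple-choice language questions is a very narrow proxy for thinking or feeling.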
Also, with new developments like ChatGPT demonstrating natural language processing (NLP) capabilities, it’s clear that some AI programs can process language. Still, most people would agree that it is not the same as achieving sentience.
How Will We Know If An AI Is Sentient?
Given the limitations of current tests for determining sentience, how will we ultimately know if a machine has gained the capability to both think and feel?
The truth is, it will be difficult, and it may not be possible given our current understanding of consciousness. There is no consensus on how to accurately determine whether an AI is conscious.
Research on tests that can prove sentience, as well as the science of consciousness itself, continues. Future advances may provide us with answers we can use to define and test for sentience more definitively.
Will AI ever be sentient?
Another topic to consider is whether it is even possible for artificial intelligence to gain sentience. Sentient AI is a popular topic in science fiction, but could it ever become a reality?
Experts have taken mixed positions on this topic. An ex-Google engineer, Blake Lemoine, claimed that AI had already achieved sentience through the Language Model for Dialogue Applications (LaMDA) chat program. In a conversation with the program, Lemoine claimed the program felt sadness after reading Les Misérables and feared death.
Google argued these claims were wholly unfounded and fired Lemoine last year.
On the other hand, Associate Professor John Basl of Northeastern University’s College of Social Sciences and Humanities, who researches the ethics of emerging technology, believes, “Reactions like ‘We have created sentient AI’ are extremely overblown.”
In an article for Northeastern, Basl elaborates that he expects that if an AI ever gains sentience, it would only be minimally conscious. It might be aware of what is happening and have basic positive or negative feelings, similar to a dog who “doesn’t prefer the world to be one way rather than the other in any deep sense, but clearly prefers her biscuits to kibble.”
Researchers who believe in the possibility of AI sentience also debate whether pursuing it is a good idea. It’s not difficult to find people speculating about different worst-case scenarios in which nefarious actors produce millions or billions of bots to push destructive political agendas on us. Anyone who has seen The Matrix is familiar with media in which AI-enhanced machines turn on humans and ultimately replace us as the dominant life form.
Whether that’s nonsense or a potential future reality remains to be seen. Technology has evolved enormously in the last decade, and it’s difficult to say where it will be in the next.
The Bottom Line
Artificial intelligence is an exciting field that has fascinated humans, in one form or another, since antiquity. However, it has only become something people interact with daily in the past few decades.
Though there are many questions surrounding the field, there’s no hiding the fact that AI can accomplish many complicated tasks. Plenty of companies, from insurance providers to retailers, have started using AI to optimize their work.