Generative AI: A Threat Assessment

Generative AI like ChatGPT is a threat, but not necessarily for the reasons you might think.

With tools like ChatGPT and its analogues hitting the scene, I’ve read a lot of articles asking whether such tools are a threat. As stated, the question has no clear answer because it’s ambiguous. Journalists and technologists discuss the threat of AI ad nauseam, with purported dangers ranging from merely creating a more ignorant population (“People don’t know how to write anymore!”) to world domination and everything in between.

It’s possible (if that’s what interests people) to assess each threat independently. I think it’s more helpful, and probably more efficient, to take a first-principles approach: look at what the technology is and what it does. With that foundation, questions about threats become easier to answer.

Generative AI

What is “generative AI”? I asked ChatGPT and here’s what it told me:

Generative AI refers to a type of artificial intelligence that is designed to create or generate new content, such as images, music, or text, that is original and unique. This type of AI uses machine learning algorithms to learn from large sets of data and then generate new content based on that learning.

That’s a pretty good definition, but it conveys a kind of creativity that many (including me) aren’t willing to grant. Generative AI does generate original content (thus the name); how it does this is another matter.

When I was at Microsoft, I was on a team that was awarded a patent for creating a tool that could evaluate code samples that learners on our digital learning platform entered as answers to exam questions. It was an AI natural language processing (NLP) tool and was, at the time, innovative.

Natural language processing, or computational linguistics, is the discipline within artificial intelligence focused on processing and generating natural languages (like English or Spanish). While computer code isn’t a natural language in the strict sense (it’s a formal, rule-based language), the systems used to process a natural language can be applied to computer code as well.
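To make that concrete, here’s a tiny sketch in Python (my own illustration, not anything from the patented tool): the same simple tokenization step an NLP system applies to an English sentence works just as well on a line of code.

```python
import re

def tokenize(text: str) -> list[str]:
    """Split text into lowercase word-like tokens."""
    return re.findall(r"[a-z_]+", text.lower())

# The same pipeline handles English prose...
print(tokenize("Epistemology is the study of knowledge."))
# -> ['epistemology', 'is', 'the', 'study', 'of', 'knowledge']

# ...and computer code: to the system, both are just token streams.
print(tokenize("def total(items): return sum(item.price for item in items)"))
# -> ['def', 'total', 'items', 'return', 'sum', 'item', 'price',
#     'for', 'item', 'in', 'items']
```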

How computers learn

In working with the engineers on the project, I learned a lot about how NLP systems “learn” and then analyze and process languages. These systems consume a large corpus of content (the larger the better) in the same general category (for example, narrative text or computer code) and then build statistical models based on the patterns they find.

Take the following sentences:

  1. I took an epistemology course in college that studied what knowledge is.
  2. I love epistemology because I want to better understand what knowledge is and how it’s used.
  3. When a person studies the possibility, limits, and scope of knowledge, they’re within the discipline called epistemology.

An NLP system might analyze these sentences, found anywhere on the internet, and build a statistical model from them. The words “knowledge,” “epistemology,” “study,” and “understand” all fall within a certain proximity to each other.

The more data (sentences) the NLP system can consume, the more it can build a map of how often those words are found together and in what grammatical construction. This is, very roughly, how an NLP system “learns.” Of course, it’s much more involved than this but it gives you a general idea of how it works.
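Here’s a toy version of that idea in Python. It’s my own illustration and a deliberate oversimplification (real systems learn far richer statistical representations from billions of sentences), but it shows the flavor of those proximity statistics:

```python
from collections import Counter
from itertools import combinations
import re

SENTENCES = [
    "I took an epistemology course in college that studied what knowledge is.",
    "I love epistemology because I want to better understand what knowledge "
    "is and how it's used.",
    "When a person studies the possibility, limits, and scope of knowledge, "
    "they're within the discipline called epistemology.",
]

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

# Count how often each pair of words appears in the same sentence.
pair_counts = Counter()
for sentence in SENTENCES:
    words = set(tokenize(sentence))
    for a, b in combinations(sorted(words), 2):
        pair_counts[(a, b)] += 1

# "epistemology" and "knowledge" co-occur in all three sentences,
# so the model treats them as strongly associated.
print(pair_counts[("epistemology", "knowledge")])  # -> 3
```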

After a tool has consumed enough data, you can ask it, “What is epistemology?” It may come back with this phrase: “Epistemology is the study of knowledge.” The system analyzes your text as a question, and the model, based on the statistical analysis it has done, provides a natural language response.
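Continuing the toy sketch above (again, a crude stand-in for what real models do), the “answer” can be assembled from nothing more than those co-occurrence counts:

```python
from collections import Counter

# Assumes the pair_counts built in the previous sketch.
def associations(term: str, pair_counts: Counter, top: int = 3) -> list[str]:
    """Return the words most strongly associated with a query term."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == term:
            scores[b] += n
        elif b == term:
            scores[a] += n
    return [word for word, _ in scores.most_common(top)]

print(associations("epistemology", pair_counts))
# -> ['knowledge', ...]: "knowledge" ranks first because it co-occurs
#    with "epistemology" in every sentence of the tiny corpus
```

Nothing in that lookup understands what epistemology is; it only reports which words tend to keep its company.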

Dennett or Searle?

Generative AI doesn’t sound like much of a threat when you think of it in those terms. Why, then, are people worried about it? For example, a think tank authored an “open letter” calling for a pause in the development of generative AI tools like ChatGPT. It calls out the following potential dangers of such a tool:

  • this technology will flood our information channels with propaganda and untruth
  • this technology will automate away all the jobs, including the fulfilling ones
  • these nonhuman minds might eventually outnumber, outsmart, obsolete and replace us [humans] 
  • this technology risks loss of control of our civilization

These are dramatic and surprising outcomes for a technology that appears merely to be analyzing words found on the internet and repeating them back in a unique way. There is something else going on, and I believe it has to do with the perspective claimed in the third bullet above. The claim is that generative AI, if developed further, amounts to the creation of a mind: not a human mind, but one like a human’s, only much more powerful and possibly nefarious.

Is generative AI a type of mind?

Whether generative AI technology is a type of mind is a complex question involving many disciplines. The first complexity is defining just what a mind is. To keep things simple, let’s establish that (at least) human minds have the following: reason, desire, feelings, the ability to reflect, and the ability to motivate action. I’ll put all these abilities under the category of what it means to “be conscious.”

The worry in the open letter, which has been signed by many prominent people in a variety of disciplines, is that generative AI can become conscious in the way humans are conscious but without all our annoying limitations: needing sleep, processing information slowly and inefficiently, and growing old and dying.

The underlying assumption here is that what generative AI technology, like ChatGPT, is doing is what a human mind does in all the important ways. If allowed to continue to develop, the technology will become a super intelligence overpowering human intelligence. So, this question of whether generative AI is a type of mind is kind of an important one.

There are many nuanced stances one can take on this question. I’ll consider two that stand at the poles of the various positions.

Position 1: It’s a mind

To hold this position, one need not be committed to the idea that artificial intelligence is produced in a microchip in the same way human intelligence is produced in a brain. What is relevant, though, is the claim that the two types of minds produce similar outcomes. I think Daniel Dennett holds this view most clearly.

As far as I can tell, Dennett’s view is that the mind is “just” what objective science (cognitive neuroscience specifically) determines it is. I put “just” in scare quotes not because the answer is trivial or simple but to clarify that the mind isn’t anything beyond what science determines it to be. He argues for this position in his book Consciousness Explained.

The important aspect of this position is that a determination of what makes a mind is made from the outside, objectively. (The famous–or infamous–Turing Test was an early and admittedly crude objective way to test for intelligence.) Mind isn’t defined by what it’s like to have one but by explaining how things with minds came to have them (the evolutionary process of the mind’s development), the behavior of those that have them, the structure and makeup of the brain or computer chip, and, most importantly for this conversation, the results of what the mind produces–the output.

On this view, if computer systems can be developed that are adequately complex and produce outputs similar to those of other things with minds (humans and other animals), then they can be said to have minds in all the ways that are important. So, if computers can answer 69 out of 70 questions correctly on an AP biology exam, beat the reigning chess grandmaster, or outsmart the top contestants on Jeopardy!, they’re thinking and producing outputs in a way consistent with the way other things with minds think. As these computational minds get more advanced, they’ll be the most powerful minds the planet has ever seen.

Alarming indeed.

Position 2: It’s not a mind

The other position argues that a computer and a human mind do what they do in entirely different ways, even if the outputs are identical. When a human and a computer answer questions, play chess, or generate text, they may produce similar output, but how each produces it differs completely: humans do those things through conscious acts, while the computer processes data algorithmically. Philosopher John Searle has argued most ardently for this position.

Searle argues that computer intelligence is in a completely different category from human intelligence simply based on the way computers process information. The most widely known, and most polarizing, argument he’s used to make this point is the Chinese Room argument.

People who hold this position believe that machines that process data algorithmically have no internal states. They process data according to rules (they are, in effect, complex Turing machines), but, without consciousness, they can’t understand what those rules mean, and they lack other aspects of conscious states like desires and will. Humans, on the other hand, have a complex “inner life” full of emotions, will, desires, and reason.

This isn’t to claim that such machines are innocuous. A “mindless” machine like this can still do a lot of damage if it processes data in ways not intended by humans and produces damaging effects. A haywire computer hooked up to the power grid or a nuclear facility could cause a lot of damage indeed. In fact, without conscious judgement, one could argue that the output of a mindless computer could be far more damaging than the judgement of a reasonable person who possesses conscious states like sympathy and self-preservation.

Even so, this damage is not caused by a mind that decided to take over the world but by a poorly programmed computer doing what it was told to do. Garbage in, garbage out.

What’s the difference?

These two views represent a dramatic difference in perspective on whether computers have, or even can have, minds in ways that matter. (This exchange in The New York Review illustrates how dramatically different these views are.)

Still, for many in the technical community, and perhaps the population at large, if the results of artificial intelligence are catastrophic, what difference does it make whether the computer has a mind or not? If a computer or a human is tasked with judging whether a suspect is guilty or innocent and either one makes the wrong call, it may not much matter what computational or mental processing produced the decision. It was the wrong decision, and that end result is all that matters.

But this shifts the burden of responsibility to how the technology is managed, not to whether the technology itself should be developed. People or groups that amass great power (malevolent dictators, wayward governments, greedy banks, powerful corporations) have caused the loss of millions of lives, brought the world to the brink of nuclear war, and wreaked havoc on the planet, all without the help of artificially intelligent computers.

Checks and balances are put in place in governments to prevent this from happening. Laws are created to keep people motivated to stay within boundaries that make life livable for everyone. Even some religions try to motivate people to act responsibly towards other humans and be good stewards of the planet.

So, the answer to the question of whether generative AI is a threat is a resounding yes. Like any powerful technology, person, or group of people, left unmanaged it can do a tremendous amount of damage. Worse, these powerful tools in the hands of malevolent people who use them intentionally to cause harm are the stuff of nightmares.

What the threat of generative AI lacks is novelty. This technology isn’t an alien force that transcends comprehension and our ability to control it. It offers many benefits to humanity. But it also suffers from all the flaws of the people who create it.

And that’s a threat we’ve not only seen before but one which, with persistence and patience, can be managed.
