Generative AI Creates Content Just Like You Do

It can be argued that generative AI tools like ChatGPT create content exactly the way humans do; they just do it faster and more efficiently. This has enormous consequences for how we should think about the moral and legal implications of the technology.

Wherever there can be a helping hand, we can raise the question of just who is helping whom, what is creator and what is creation. How should we deal with such questions?

Daniel Dennett

With generative AI tools like ChatGPT and Bing Chat making a huge splash, there is a lot of talk about how the technology will change the job landscape and life on the planet in general. I wrote an article that surveys ways to think about any “threats” the technology may pose. The threat of generative AI is one consideration, albeit an important one. The issues around copyright and how the tools can and should be used by industry and individuals are another.

Copyright and intellectual property considerations are complex and involve topics that range from ethics and law to how workers should be paid when these tools are used to generate creative and information-based output. All the topics in this broad range will be studied and discussed in depth, and philosophy can help inform those studies. Here is a particularly interesting philosophical question, the answers to which provide a foundation for studying these topics: is there a difference between the way generative AI tools learn and create content and the way humans do it? If humans and AI tools process and create information and works of art similarly enough, perhaps our intuitions shouldn’t be as alarmist as they currently seem to be. The answers have implications for law and regulation as well.

The creative act

When we type a prompt into ChatGPT, we get a response in natural language generated by algorithms that have consumed a body of content from the internet, content from which the technology has “learned.” The word learned is in quotes because, in my opinion, the precise definition of that term is at the root of many of the questions that need to be explored. For example, some are investigating whether generative AI tools violate intellectual property laws. This is an interesting and relevant topic to sort out, and to answer the question, it seems essential to understand how generative AI tools process information from the internet and generate a response relative to how humans do it.

How generative AI learns and responds

When a tool like Bing Chat (which uses the underlying technology behind ChatGPT) learns, it consumes a large (the larger the better) corpus of content in the same general category and then builds models based on patterns. It generates a natural language response from that data using complex algorithms that put sentences together according to the rules of a natural language like English.

For example, a generative AI tool may find content like this in various places on the internet:

  1. The best ice cream sundaes have tons of chocolate sauce, lots of whipped cream and a cherry on top.
  2. To make a sundae, start with two scoops of ice cream, drench the ice cream in chocolate sauce and spray whipped cream on top. Then top with a generous helping of sprinkles and chopped peanuts.
  3. My daughter loved her ice cream sundae. She had chocolate sauce all over her face at the end and ate so much whipped cream that she got a belly ache.

The tool analyzes these sentences and looks for patterns: words that appear next to each other and in certain grammatical constructions. From sentences like these (and thousands of others), the tool creates a language model based on statistics.

Now suppose you create the following prompt in Bing Chat: “What is an ice cream sundae?” I did just that and got this response from the tool:

An ice cream sundae is a dessert that consists of one or more scoops of ice cream topped with various sauces, whipped cream, nuts, cherries, or other ingredients.

Bing Chat

While this example is oversimplified in the extreme, the basic model holds. Generative AI learns from enormous amounts of data and generates a response in natural language using a statistical model that it creates from that data.
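To make the statistical idea concrete, here is a minimal sketch of a bigram language model built from shortened versions of the sundae sentences above. This is my own illustration, not the actual architecture behind ChatGPT or Bing Chat; real tools use neural networks trained on vastly more data. But the core move, counting patterns and then sampling from the resulting statistics, is the same in miniature.

```python
import random
from collections import defaultdict

# Shortened paraphrases of the sundae sentences from the example above.
corpus = [
    "the best ice cream sundaes have tons of chocolate sauce",
    "to make a sundae start with two scoops of ice cream",
    "my daughter loved her ice cream sundae",
]

# Count which words follow which: the "patterns" in the text.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)

def generate(start, length=8):
    """Generate text by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("ice"))  # e.g. "ice cream sundaes have tons of chocolate sauce"
```

Words that occur together more often get sampled more often, which is all “learning” means at this level of description.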

How humans learn and respond

Human consciousness is complex, and humanity’s insight into that complexity, while growing each day, still leaves many dark areas. However, in the ever-growing body of literature on the topic, philosophers and scientists are slowly unlocking the mysteries of the human mind.

A philosophical movement known as Functionalism holds that the human mind operates on a model similar to that of computer systems. Even though digital computers and the “wetware” that makes up human brains and central nervous systems use different methods of data processing, they are similar enough to support an identity comparison.

Put simply, a functionalist argues that a mental state like being in pain (the most popular example in the literature) is caused by certain inputs, such as stubbing your toe, which produce a change in your brain and central nervous system, which in turn produces certain outputs like crying or wincing. If that model of inputs, state change, and outputs sounds familiar, it’s because it’s the same model that computer systems operate on.
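As a rough sketch of that picture (mine, not one drawn from the functionalist literature), here is the pain example written as a trivial state machine. The state names and stimuli are invented labels for illustration; the point is only the shape of the model: inputs cause state changes, and states produce outputs.

```python
# A toy functionalist model: a mental state is defined by its causal role,
# that is, which inputs produce it and which outputs it produces.

TRANSITIONS = {
    # (current state, input)      -> new state
    ("calm", "stub toe"): "in pain",
    ("in pain", "take aspirin"): "calm",
}

OUTPUTS = {
    # state -> observable behavior
    "in pain": ["wince", "cry out"],
    "calm": [],
}

def step(state, stimulus):
    """Apply an input, transition to a new state, and emit that state's outputs."""
    new_state = TRANSITIONS.get((state, stimulus), state)
    return new_state, OUTPUTS[new_state]

state, behavior = step("calm", "stub toe")
print(state, behavior)  # in pain ['wince', 'cry out']
```

Whether the machine running this model is silicon or wetware is, for the functionalist, beside the point.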

Philosopher Daniel Dennett has articulated this position for a modern audience in an accessible and engaging way better than most. His particular axe to grind is explaining creativity and consciousness in terms of Darwinian processes. For our purposes, however, the origin of the creative mechanism isn’t as important as the mechanics that the mechanism produces.

Dennett’s argument is that human creativity is a process of learning, aggregation, and reproduction from the accumulated data. In other words, humans do what complex AI systems do, just using different “hardware” to do it.

Writing about IBM’s Deep Blue system beating reigning chess grandmaster Garry Kasparov, Dennett notes that Deep Blue’s creators could not themselves design the kinds of chess games that could beat Kasparov. “Deep Blue can,” says Dennett. He’s alluding to the mechanics that create winning chess games. That algorithmic machine is better than the brain-process machine that created it, at least at chess. Even so, the way Deep Blue develops winning chess games is, in every way that’s important, the way Kasparov does it. Dennett writes,

I am sure many of you are tempted to insist at this point that when Deep Blue beats Kasparov at chess, its brute force search methods are entirely unlike the exploratory processes that Kasparov uses when he conjures up his chess moves. But that is simply not so–or at least it is not so in the only way that could make a difference to the context of this debate about the universality of the Darwinian perspective on creativity.

Kasparov’s brain is made of organic materials, and has an architecture importantly unlike that of Deep Blue, but it is still, so far as we know, a massively parallel search engine which has built up, over time, an outstanding array of heuristic pruning techniques that keep it from wasting time on unlikely branches. There is no doubt that the investment in R and D has a different profile in the two cases; Kasparov has methods of extracting good design principles from past games, so that he can recognize, and know enough to ignore, huge portions of the game space that Deep Blue must still patiently canvass seriatim. Kasparov’s “insight” dramatically changes the shape of the search he engages in, but it does not constitute “an entirely different” means of creation.

Presidential Addresses of the American Philosophical Association, 2000–2001, p. 21
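For readers curious what “heuristic pruning techniques that keep it from wasting time on unlikely branches” look like in practice, here is a minimal sketch of minimax search with alpha-beta pruning, the textbook example of such a technique. It is a generic illustration, not Deep Blue’s actual algorithm, and the `children` and `evaluate` functions are assumed placeholders for move generation and a position-scoring heuristic.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Score `node` by searching at most `depth` plies ahead.

    `children(node)` returns successor positions; `evaluate(node)` is a
    heuristic score from the maximizing player's point of view.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # the remaining siblings can't change the result,
                break          # so these "unlikely branches" are skipped entirely
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# A toy game tree: internal nodes list their children, leaves carry scores.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = alphabeta("root", 2, float("-inf"), float("inf"), True,
                 children=lambda n: tree.get(n, []),
                 evaluate=lambda n: scores.get(n, 0))
print(best)  # 3: once b1 scores 2, the b2 branch is pruned without being searched
```

Kasparov’s “insight,” on Dennett’s telling, amounts to a vastly better-tuned version of that pruning, not a different kind of process.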

Writing about sundaes

If this is correct, then the way you or I might learn about and write about sundaes isn’t, in many relevant ways, much different from the way ChatGPT does it. We learn about ice cream and chocolate toppings as early (and as often) as our parents or guardians let us. We learn that the linguistic symbol “ice cream” refers to something creamy and delicious and learn to associate the two. We learn that ice cream served together with certain other ingredients (which we also learn to associate with certain words) is identified by the word “sundae.” Over time we learn how to make the delicious treat and may even get into making them professionally.

One day you’re talking to someone from another country who has never heard the English word “sundae” and they ask you, “What is a sundae?” “Oh, an ice cream sundae is a dessert that consists of one or more scoops of ice cream topped with various sauces, whipped cream, nuts, cherries, or other ingredients.” Sound familiar?

Where did you get this definition? You certainly didn’t know it upon entering the world at day 0 of your life. You learned it over time and created the definition out of an aggregation of other definitions and firsthand experiences, to which you applied a language model: symbols that refer to those things.

In many ways, isn’t this the process of formal education? Read these books on a particular topic so you can aggregate the perspectives of the different authors and “form” a “perspective” about the topic. You’re then asked to write a paper on the topic. What is the paper? Is it more than an aggregation of all the research you’ve done? Aren’t you synthesizing all the input data to, somehow, “create” a perspective on what’s important, what you should emphasize, and what you should leave out? And how did you come to learn how to do all that? Weren’t you trained by parents, teachers, and that YouTuber on the internet?

Computers don’t have experiences

There’s an important qualifier (or objection depending on how important you think it is) that seeks to differentiate between the way an AI model learns and the way a human learns. It has to do with the nature of conscious experience and is one that doesn’t land at all with people in Dennett’s camp. The objection goes something like this:

The way an artificial intelligence learns about objects is by processing 1s and 0s. It doesn’t have experiences and has no inner, conscious life. Computers consume and process information digitally while humans consume and process information experientially.

Using the example of the sundae, you and I learn about the sundae using language, yes, but we also learn about sundaes by eating and enjoying them. We gain a tremendous amount of information about them through the five senses and through what those experiences do to our brains rather than just processing words about sundaes. It may all be data at the end of the day, but it’s a very different type of data from the type computers process. That difference makes all the difference.

In my view, this is a substantial and relevant position to consider. However, even if it’s true, the objection doesn’t really bear on the method of learning we’re discussing here. Information is brought to us in bits and bobs, and our brains seem to aggregate, sort, and apply a weighting to that information. This process then allows us to create a mental and linguistic model of the world. This is what functionalists like Dennett care about when it comes to artificial intelligence. It’s also the salient point when it comes to questions about intellectual property and using AI tools to assist with creating content.

Who is the author?

This brings us to the germane point. If all creative work is a product of incorporating, processing, reorganizing, and incrementally adding to the work of others, how should we legally, morally, and aesthetically view the creator of a “unique” work? Further, does it matter whether the creator is human or machine if we agree that the processes by which the creations are made are the same for both?

Dennett believes the answer is that there is no substantial difference in the way humans and machines create. Both are driven by the exact same evolutionary processes and humans don’t possess some unique deus ex machina that enables them to create from whole cloth. The fact that humans want to protect this creative act is merely psychological.

Biologically, an evolutionary process creates intermediaries on its way to speciation. As the intermediaries disappear, natural selection leaves “islands” that are “surrounded by empty space,” says Dennett. This gives the illusion that those islands were never joined by intermediaries even though, biologically, they had to be. Something similar may be going on with our psychology when it comes to creativity. He writes,

We might expect the same sort of effects in the sphere of human mind and culture, cultural habits or practices that favor the isolation of the processes of artistic creation in a single mind. “Are you the author of this?” “Is this all your own work?” The mere fact that these are familiar questions shows that there are cultural pressures encouraging people to make the favored answers come true. A small child, crayon in hand, huddled over her drawing, slaps away the helping hand of parent or sibling, because she wants this to be her drawing. She already appreciates the norm of pride of authorship, a culturally imbued bias built on the palimpsest of territoriality and biological ownership. The very idea of being an artist shapes her consideration of opportunities on offer, shapes her evaluation of features she discovers in herself. And this in turn will strongly influence the way she conducts her own searches . . .

Presidential Addresses of the American Philosophical Association, 2000–2001, p. 24

The implications of this may not sit well. It implies that creative genius is not really genius at all, at least not in the way we typically think about it. Creative genius Steve Jobs embraced and actively practiced Pablo Picasso’s famous phrase, “Great artists steal.” At the end of the day, we’re all stealing all the time. But we don’t call it stealing, do we? We call it creativity, because we incorporate, mash up, add and take away, and then generate.

The truth of the matter is that generative AI tools are doing the same thing.

It’s just a tool

In fact, people in the pro corner of generative AI embrace this idea and view the technology as a tool that can enhance what humans are able to do rather than replace humanity’s role in the creative act altogether. At the time of this writing, Microsoft is preparing a generative AI tool called Copilot that it will build into its Office suite of products (and more). The marketing pitch is that by using generative AI, humans will be freer to focus on things that give them meaning rather than on mundane tasks that can be automated. The technology gets rid of drudgery while preserving the aspects of creative work that humans enjoy. Here’s the setup for the pitch.

Humans are hard-wired to dream, to create, to innovate. Each of us seeks to do work that gives us purpose — to write a great novel, to make a discovery, to build strong communities, to care for the sick. The urge to connect to the core of our work lives in all of us. But today, we spend too much time consumed by the drudgery of work on tasks that zap our time, creativity and energy. To reconnect to the soul of our work, we don’t just need a better way of doing the same things. We need a whole new way to work.

Introducing Microsoft 365 Copilot – your copilot for work, https://blogs.microsoft.com/blog/2023/03/16/introducing-microsoft-365-copilot-your-copilot-for-work/

Of course, if the functionalists’ arguments are correct, then human creativity and innovation are on a continuum with Copilot’s, not on a different plane altogether. Still, the pitch resonates. When the digital spreadsheet was introduced, there was a certain risk for those in math-based jobs like accountants and mathematicians. A digital tool was able to do what had formerly been a time-intensive, specialized skill. What actually happened was that accountants and mathematicians used spreadsheets to save a lot of labor and drudgery so they could focus on other things, like solving complex problems and exploring and developing new ideas.

The paradox in all this is that if creativity and learning just are aggregation, reorganization, and regurgitation, then wouldn’t “solving complex problems and exploring and developing new ideas” be just another phrase for “aggregating, reorganizing, and regurgitating” the ideas that already exist in the world? Science, art, and philosophy do “advance,” which seems, non-trivially, to mean that the ideas being developed are, on the whole, better than the aggregated, reorganized, and regurgitated parts they come from.

But Dennett would argue that this is exactly what evolution does. Complex biology was built out of the most rudimentary biology. Consciousness (whatever that turns out to be) in humans and other animals didn’t exist in the beginning. But over millions of years of aggregation and reorganization, consciousness emerged.

And here we are reading and writing in a natural language and creating machines that can do that too.

