AI vs Sci-Fi (Publishers)

One iron rule of technology is that any technology that can be used for pornography will be used for pornography. Another is that any technology that can be used for grifting will be used for grifting. The latest grift involves people using AI to generate science-fiction stories in an attempt to sell them to publishers.

Amazon has also seen a spike in AI generated works, although some sellers are being honest about the source of the text. Before the availability of these text generators, some people would steal content from web pages and attempt to sell it as books. That sort of theft is easier to catch than AI generated text, and given the current state of detection software, it is likely that AI generated text will generally be able to avoid automated detection. This means that if a publisher wants to sort AI generated text from human generated text, they will need humans to do the work. As would be expected, some types of work are easier to identify as AI generated than others, and the current AI text generators handle some types of text better than others. As such, in some cases a human reviewer need not discern whether the text is AI generated; they can simply do what they have always done, which is weed out the bad text. Fortunately for science-fiction publishers and writers, AI is currently bad at writing science fiction.

But the practical problem is that certain publishers are being flooded with AI generated submissions and they cannot review all these texts. Since an AI can generate a story from a short prompt, a person using one can rapidly create a swarm of stories. In terms of the motivation, it seems to mostly be money—the AI wranglers hope to sell these stories.

One magazine, Clarkesworld, has seen a massive spike in spam submissions, getting 500 in February (contrasted with a previous high of 25 in a month). In response, they closed submissions because they lacked the resources to handle the deluge. As such, this use of AI is harming publishers and writers. As would be expected, some have blamed AI, pointing to this as yet another harm it has caused. But it is obviously unfair to blame AI.

From the standpoint of ethics, the current AI text generators lack the moral agency needed to be morally accountable for the text they generate. They are no more to blame for the text than the computers used to generate spam are to blame for the spammers using them. Obviously, the text generators are just a tool being misused by people hoping to make easy money who are not overly concerned with the harmful consequences of their actions. To be fair, some people are probably just curious about whether an AI generated story would be accepted, but these are presumably not the people flooding publishers.

While these AI wranglers are morally accountable for the harm they are causing, it must also be pointed out that they are operating within an economic system that encourages and rewards many types of bad or unethical behavior. While deluging publishers with AI spam is obviously not on par with selling dangerous products, engaging in wage theft, or running NFT and crypto grifts, it is still the result of the same basic system that enables and rewards (and often protects) such behavior. In sum, the problem with current AI is the people who use it and the economic system in which it is used. AI has simply become yet another tool for spamming, grifting, and stealing. But there is some interesting potential here.

As noted above, AI generated fiction is currently quite bad (although humans obviously also generate mountains of bad fiction, some of which gets published). But it is likely that it can be improved enough to be enjoyable, if low quality, fiction. Some publishers would see this as an ideal way to rapidly generate content at a low cost, thus allowing them more profit. This would, obviously, lead to the usual problem of human workers being replaced by technology. But it could also be good for readers.

Imagine that AI becomes good enough to generate enjoyable stories. A reader could thus go to an AI text generator, type in the prompt for the sort of story they want, and then get a new story to read. Assuming the AI usage is free or inexpensive, this would be a great deal for the reader. It would, however, be a problem for “working class” writers who are not celebrity writers. Presumably, fans would still want to buy works by their favorite authors, but the market for lesser-known writers would likely become much worse.

If I just want to read a new classic style space opera with epic battles and powered armor, I could just use an AI to make that story for me, thus saving me the time of finding one already written and paying more for it. And if the story is as good as what a competent human would produce, then it would be good enough for the reader. If I want to read a new work by Mary Robinette Kowal, I would need to buy it (yes, one could pirate it or go to a library). But, as I have argued in an earlier essay, this use of AI is only a problem because of our economic system: if a writer could write for the love of writing, then AI would largely be irrelevant. And, if people were not making money by grifting text with AI, then they would probably not be making AI fiction except to read themselves. So, as would be expected, the problem is not AI but us.

Originally appeared on A Philosopher’s Blog
