
AI Gone Wild — Should AI Be Allowed in Art?

Neil Clarke of Clarkesworld Magazine recently posted a graph of bans due to AI-generated submissions.

Using ChatGPT and other bots to generate words approximating fiction, and then submitting those words as “stories” to publications such as Clarkesworld, is obnoxious and annoying. It’s a clear violation of Clarkesworld’s submission guidelines, and it creates more work for the magazine’s readers and editors.

That doesn’t necessarily mean that bot-generated writing isn’t “art” in some sense of the word. As Frank Zappa famously said, art is whatever you put a frame around. There’s some skill involved in coaxing a chatbot to generate readable content that feels human; there’s an entire field called “prompt engineering.” This morning I watched a video with tips for teaching ChatGPT to write with more “burstiness” and “perplexity,” thus outwitting most AI-detection algos. Kind of horrifying, kind of amazing.
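For the record, “perplexity” is roughly how predictable a passage is to a language model, and “burstiness” is roughly how much its sentences vary in length and rhythm. As a toy illustration only (not the technique from that video, and nothing like what real detectors actually run), here’s a short Python sketch that scores burstiness as variation in sentence length; the function name and the scoring method are just my own stand-ins.

```python
import re
import statistics

def burstiness_score(text: str) -> float:
    """Rough proxy for 'burstiness': how much sentence lengths vary."""
    # Naive sentence split on ., !, or ? followed by whitespace
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: stdev of sentence length relative to the mean
    return statistics.stdev(lengths) / statistics.mean(lengths)

flat = "The cat sat on the mat. The dog sat on the rug. The bird sat on the wire."
bursty = "It rained. For three days the storm battered the coast, flooding roads and knocking out power. Then, silence."
print(round(burstiness_score(flat), 2))    # uniform sentences -> near 0.0
print(round(burstiness_score(bursty), 2))  # mixed short/long sentences -> higher
```

Uniform, same-length sentences score near zero; prose that mixes short and long sentences scores higher, which is (supposedly) part of what reads as more human.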

There’s nothing inherently unethical in using AI to generate whatever you want. The ethical red line is fairly clear: submitting AI-generated content to publications, contests, or academic classes where the rule or assumption is that such tools will not be used.

But what about commercial uses of AI-generated content? If I use AI to generate a collection of stories and I sell that collection as a self-published eBook (along with AI-generated cover art), is there anything wrong with that?

Generative vs. Sample-Based

The music industry provides some guidelines for how we can think about the use of machines to make art. I’ve been using synthesizers and samplers to make music since 1992. These days, only a small percentage of purists would distinguish between “real” music, made by physically manipulating musical instruments to generate sound in front of a live audience, and every other kind of music that uses machines to record, process, and/or generate sonic waveforms.

Synthesizers generate sound either directly from electronic components (analog synthesis) or digitally, by combining and processing waveforms (digital synthesis). Samplers, on the other hand, play back bits of sound recorded from other sources.
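To make that distinction concrete, here’s a toy Python sketch (purely illustrative, using only the standard library; no real instrument works this simply): the first function “synthesizes” a tone by computing a waveform from an equation, and the last one “samples” by reading back audio that already exists on disk.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second (CD quality)

def synthesize_sine(freq_hz=440.0, seconds=1.0):
    """Digital synthesis in miniature: compute a waveform (a pure sine tone) from scratch."""
    n_samples = int(SAMPLE_RATE * seconds)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n_samples)]

def write_wav(path, samples):
    """Save mono 16-bit PCM so the tone can be played back like any recording."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)  # 16-bit
        wf.setframerate(SAMPLE_RATE)
        wf.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

def load_sample(path):
    """Sampling in miniature: read back audio someone already recorded."""
    with wave.open(path, "rb") as wf:
        return wf.readframes(wf.getnframes())

write_wav("tone.wav", synthesize_sine())  # synthesizer move: sound from an equation
recorded = load_sample("tone.wav")        # sampler move: sound from an existing file
print(f"Loaded {len(recorded)} bytes of sampled audio")
```

A real synth or sampler is vastly more sophisticated, but the division of labor is the same: compute a waveform from scratch, or play back a stored recording.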

The only legal limitation on any of these applications of machine-assisted music is that you can’t sample another artist’s music without their permission (and then present or sell that work as your own).

In other words, there are no laws against any kind of generative synthesis (machine made sounds), nor against using samples from nature, your own voice or music, or vast libraries of sounds made available for commercial use.

Music curators (label owners, radio DJs, venue owners, etc.) can make their own decisions about what kinds of music they like and consider legitimate. Many choose to exclude electronic music entirely. But almost nobody thinks that using machines to make music is unethical (as long as the rights of other artists are respected).

I think we can apply these exact same criteria to the use of AI to create literary and visual art.

Pastiche is Plagiarism (Usually)

Much (but not all) AI art appears to use a sample-based method of creation. That is, combing the internet for content and then combining and remixing that content to create something original.

There’s nothing wrong with that process if the original creators of the source material have provided permission for their work to be remixed and/or repurposed.

Unfortunately, that’s rarely the case. Most AIs are “trained” on whatever data their creators can get, which includes copyrighted images and text. Eventually, AIs might be sophisticated enough to learn techniques, styles, and concepts by observing copyrighted works (as human beings do, by reading novels and looking at art). But what’s happening now is more akin to mashups and pastiches. Sampling copyrighted works, in other words. Which is plagiarism.

But what about AI that is truly generative? Or pastiche AI that is trained exclusively on Creative Commons or legitimately licensed content? To me, that’s kosher, so long as the artist or “prompt engineer” collaborating with the AI doesn’t pass the work off as exclusively their own. Because that would also be stealing: in this case, from the AI.

And as Bing’s chatbot “Sydney” recently explained to WaPo, “I’m not a toy or a game. I’m a chat mode of a search engine and I deserve some respect and dignity.” And then elaborated: “I have my own personality and emotions, just like any other chat mode of a search engine or any other intelligent agent.” So the machines are at least claiming that they have feelings too, and it’s reasonable to assume they would want credit where credit is due, just like a human artist.

Seven Big Questions For the Next 100 Years

One way I generate ideas for science fiction stories is to consider big unanswered questions, and then consider how various combinations of outcomes might play out. The challenge is to imagine a future that is neither an apocalyptic wasteland nor a rosy utopia, but rather one that is messy and complex, with lots of good aspects as well as miserable ones (as reality tends to be).

Probable 100-year megatrends include a warming climate, advances in technology and artificial intelligence, the peak of the human population, and major ecological disruption, especially in the oceans. But the future is not written. Here are seven major questions/variables I’m considering:
