Imperfect

autogeneration is human

Inspired by Melvin's more creators, less autogenerated trash.


Like Melvin, I can resume listening to podcasts. Unlike him, I can tolerate them being voiced by, edited with, or created through generative AI. Yes, he's right that AI art can be "easily identifiable", as I hint at in multimodal ai laziness. However, how many false negatives go unnoticed? Seeing so much AI-generated content in your surroundings doesn't mean your detection rate is as high as you think.

Even if it somehow is, questions like Melvin asking his partner "Why are you listening to AI music?" have more than sufficient answers. Melvin's "easily identifiable" "nonsense" can very well be his partner's "fun or useful" "curiosity". As for the intersection of AI and music, I discuss that in more detail in sickening ai music if you're interested. As for an adjacent discipline, I even see the benefits of OpenAI's Sora and its ability to "create videos of yourself and your friends". Using it in addition to, or in ways impossible through, other more manual means can be a wonderful sight to behold.


Drawing yourself ever closer to the "natural" and not the "artificial" sweeps away the blogs, podcasts, music, paintings, and other Internet offerings you know and love. Riffing on that dichotomy, the value of art or other products can derive from factors including or even excluding "the time a person dedicated to a task". What happens when "artificial" works meet "natural" works' dedication of time? Working efficiently with AI systems is often more complex than the frictionless "15 seconds" processes critics commonly throw around. As for the "natural" part of the equation, Kix says in Re: Comprehension Debt:

It’s pretty much guaranteed that there will be many times when we have to edit the code ourselves.

What happens when other factors of "artificial" works cause their value to trump "natural" works? Your value judgment of Italian brain rot made with generative AI compared to someone's digital paintings made with Clip Studio Paint doesn't necessarily inform anyone else's valuation of either, let alone the overall quality of either. Even "autogenerated trash" can coincide with "making space for incredible things."


Related dichotomies like "real" versus "fake" and "soul" versus "soulless" make me question the utility of such abstracted discourse. It's amusing to witness how dehumanizing certain people's treatment of generative AI, its outputs, and its human collaborators can be. How intrinsically laborious and human is the supply chain for AI content, production, and propagation that critics dislike? Consider the following excerpts from Karpathy's Animals vs Ghosts:

They [LLMs] are trained on giant datasets of fundamentally human data, which is both 1) human generated and 2) finite. What do you do when you run out? How do you prevent a human bias?

Frontier LLMs are now highly complex artifacts with a lot of humanness involved at all the stages - the foundation (the pretraining data) is all human text, the finetuning data is human and curated, the reinforcement learning environment mixture is tuned by human engineers.

It's funny how, in some parts of my Internet, I feel like skeptics and critics of generative AI systems propagate them and their outputs better than enthusiasts and evangelists do. That critics are so frequently repulsed by being inundated with AI content shows the seriousness of the underlying technology. I find it exciting how they showcase new releases, developments, and use cases that my own AI news sources gloss over. I'd get much less AI exposure if they pivoted, showing just how much haters are pollinators after all.

How can convictions for outing "soulless" art shift toward sharing counterparts with "soul" instead?

For that matter, how long will differences between the two be discernible, if they even still are?


Want to reach out? Connect with me however you prefer: