Yummy, Yummy Human-Written Content
I hereby swear I will not use AI to write this blog.
Contents
- Introduction (You Are Here)
- Epigenesis

Introduction
If I produce real, human writing daily, will my writing have an impact on generative AI? I like and dislike gen AI: I like what it actually does, but I don't like how people are using it. Even though I'm critical of it, and I wouldn't dream of using it to represent my opinions for me, it may keep being used forever. For a while, that made me feel like even more of a doomer than usual. It felt like humans were helpless against the slop machine, and no one was discussing how bad it was in a way that even began to seriously address the issue. It was all just, "If you use AI you're bad, because the eternal spark of the human soul, blah blah." Much as I love the human soul, an intangible argument doesn't address the real, tangible problems (and benefits!) caused by a new technology, and it doesn't attempt to make the technology better for people.
Then, Adam Ragusea came out with his video, "The worst thing about 'AI'", and it was so Adam Ragusea of him. A lot of the video is about the difference between human mistakes and machine mistakes. I am a music major, so I'm intimately familiar with one of Adam Ragusea's mistakes, which was trying to convey a music opinion via a game of journalist telephone, at Vox's request. However, the way he handled it, and the way he handles his mistakes in general (laying out the situation so we can see his perspective, pointing out which parts he actually regrets and which parts he thought were reasonable enough, being a bit self-deprecating, never doubling down if he knows he messed up, but never apologizing for things he does not regret), charmed me, and here I am, continuing to watch his channel all these years later.
I think my favorite thing about this latest video on AI is how long it took him to respond to the topic, given that it has been the hot issue in broader society for the last two or three years. I was honestly surprised he hadn't said something sooner, because the guy loves saying things. He seems quite happy to share a semi-lay opinion here and there on topics he's not intensely invested in, even if I'd be uncomfortable doing the same in his position, and I like hearing what he has to say regardless of whether or not he's an expert. I follow him for a reason. However, I know he has a background in journalism (this is one of the first things I ever knew about him!) and generative AI is a big problem for journalism! I've been wondering what he thought, and I suppose he thought carefully, because now is the time he chose to talk about it. There's something unusually patient and clear about the way he delivered this video. I like to think that he has been mulling this over, and has really considered what the most important thing he COULD say on this topic was. His main idea was that fabulism ("making shit up" is his definition) is the worst thing a journalist can do, and fabulism is what LLMs do (it's not just what they're best at; it's what they do), and humans can be deterred from it through social pressure, but LLMs cannot, and that makes them dangerous.
He quite aptly compared the problem of fabulism in LLMs with the problem of incentivized dishonesty in Russia. Sorry, Russians, that's also what I know your society for. Adam reached for the example of a tank mechanic who lies about installing tank armor and thereby hurts the war effort, which is a good, comprehensible illustration. I found the example compelling for another reason, though: there has been a lot of discussion lately about what Trumpism is, and how it compares to the political strategy of Putin and company. I can't recall right now which specific sources I initially heard this from, but the topic of intentionally cultivated societal distrust in Russia has been covered pretty frequently since 2016, and we've only learned more about it with new investigations into Facebook, into troll farms, and into how those troll farms are used by governments.
I think, ultimately, the most comforting thing about this video for me is that he made it. Humans taking their time, thinking, and applying their own knowledge is a valuable thing. I valued what Adam Ragusea specifically would have to say about this topic, and I was glad to see him saying it. Something I think is undervalued in the conversation around AI is the trust relationship between authors and their readers. Adam Ragusea points that out when he discusses how important trust is for a society, and how Russia's military failure came down to that total loss of honesty; his drawing that connection was more meaningful to me precisely because it was him who drew it.
We also know these trust relationships between an author, creator, journalist, whatever, and the audience are ripe for exploitation... and I ran out of time to finish writing this. Apologies.
Epigenesis
- Internet of Bugs's "AI Has Us Between a Rock and a Hard Place": This video is why I've got AI on the brain.