Eh 🤷
Cognitive dissonance, artificial intelligence, cloning my voice, and Burt Reynolds
Sometimes I think the secret to a happy life is the ability to manage cognitive dissonance. I'll give you an example. Over brunch, a friend tells you they believe the apocalypse is imminent. You agree, not because you want to be polite, or because you're playing along with a bit, but because you read the news too, and so it's obvious that humanity is fucked. Believing this, you might skip the egg-white omelet and opt for a full stack of chocolate chip pancakes, because if we're fucked, calories don't count. Or, you might dine and dash like you're Ricky Schroder in a very special episode of Silver Spoons, because if we're fucked, there's no sense paying your bills. Or, if you're really embracing the apocalypse now scenario, you might order the chocolate chip pancakes, skip out on the bill, steal a motorcycle and a pair of ass-less leather chaps, and go marauding in the wasteland. Point is, you believe there's no future, and yet you act as if the future matters. This is cognitive dissonance. The ability to manage it is what makes it possible to enjoy brunch, on the one hand, and stomach the news, on the other.
Lately, I've been working overtime to manage the cognitive dissonance associated with artificial intelligence. As a writer, I spend an unhealthy amount of time in online writer communities that tend to view everything about AI as unethical and immoral. Half the posts are about how the AI crowd consists of Bond villains bent on stealing everyone's material, putting us all out of work, and sucking up the Earth's resources. I (mostly) agree with that sentiment. The rest of the posts are about how writers who use AI, in any way, are class traitors who should fuck off and die, or at the very least refrain from calling themselves writers. I know these posts are written by humans, but I can't help but notice that, in the aggregate, they commit one of the sins people level at AI writing, namely that it's cookie-cutter slop. Turns out, originality and voice are difficult for humans and machines. Anyway, I (completely) disagree with the class traitor genre of posts.
On the other side of the cognitive divide there's the world as it is, not as we wish it to be. Here, AI is increasingly ubiquitous, not because it has lived up to Sam Altman's wildest-dreams hype, or ever will, but because there are countless buggy, not-quite-ready-for-prime-time AI tools providing real utility to real humans who aren't spending their time raging against the machines. Put another way, you can go full-ostrich on the AI Revolution, and you can scream into the sand that it must stop now, but the world will continue spinning.
When it comes to AI, I have one foot in each camp. My heart is with the idealists, but my head is with the realists. In practical terms, that makes life tricky in the same way that I imagine being an undercover cop is tricky. At work, I am pro-AI. Among my fellow writers, I am anti-AI. Neither one of these identities is core to who I am, but like the undercover cop, my life and livelihood depend on saying the right thing, at the right time, to the right people. More importantly, each situation requires me to believe what I'm saying, even though I contradict myself. And in fact, I do believe that AI is:
Awful / Wonderful
Depressing / Exciting
Over-hyped / Under-rated
Wasteful / Efficient
Destructive / Constructive
I could go on, but you get it. Two conflicting ideas, one human brain, and a buttload of cognitive dissonance. Which brings me to the week that was.
At work, I edited a piece about the AI gender gap. Turns out, men are using AI at higher rates than women, which means women are in danger of falling behind. Unlike the women in the writing communities I belong to, the woman who wrote the piece doesn't have the luxury of going full-ostrich on AI. Actually, she doesn't believe any woman has that luxury, regardless of occupation; that's why she wrote the piece.
While editing that piece, I came across an essay about AI denialists, aka the ostrich crowd. I recognized my peers among the denialists, but perhaps more importantly, I also recognized myself.
Also at work, my boss said they'd reimburse me for subscriptions to Claude, ChatGPT, and other AI tools. Later that day, I used Claude to perform a task that we previously would've considered important, but not worth the time. With Claude's help, it took a few minutes instead of a few hours.
In my spare time, I joined my friend Alex Dobrenko`, who hosted a group writing session on Zoom. Alex had to skip out in the middle of the session, so he put Seth Werkheiser in charge. Seth put on some music. I wrote my ass off, and at the end of the session, I complimented the music. That's when Seth dropped a bomb: The music I'd been jamming out to was AI-generated. Seth joked that every writer on Substack would come at me with pitchforks. It was funny because it was true. Sort of.
Also in my spare time, I signed up for a subscription to ElevenLabs, an AI company that specializes in audio. I wanted to try their voice cloning tool. The idea of cloning my voice sounded creepy and strange, but it also sounded cool and (potentially) useful. I've always wanted to create audio versions of my stories. In fact, my dream isn't to publish books, but to produce audiobooks, because audio is my primary way of experiencing fiction and nonfiction. I've worked on performing my own books and experimented with hiring voiceover artists. The results haven't been great. Meanwhile, Substack provides readers who use their app with an AI voice that reads my stories. And of course there are dozens of non-Substack tools that do the same thing. In other words, my stories are already being performed by AI, whether I like it or not.
I am he as you are he, as you are me and we are all together
To clone my voice, I asked my friend Todd to make a file of me telling him Situation Normal stories for a podcast we did together. ElevenLabs said I needed at least 30 minutes of material; Todd was able to put together 3 hours of me.
It took a few hours for the AI to clone my voice and a few more hours of tinkering to dial it in. Actually, the tinkering continues, but that's another post. The point is that in a single afternoon, I used an AI tool to produce Clone Michael, who, it turns out, does a far better job of reading my stories than I do. I was upset / excited. See: cognitive dissonance. Anyway, this is what Clone Michael sounds like reading a Situation Normal story called "We're doomed, says the barista."
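For the terminally curious: all of my cloning and tinkering happened on the ElevenLabs website, but if you'd rather script it, a minimal sketch against their public text-to-speech API looks something like this. The API key, voice ID, file names, and settings below are placeholders, not my actual setup.

```python
# A rough sketch, not my actual setup. Assumes the public ElevenLabs
# text-to-speech REST endpoint; the API key, voice ID, and file names
# here are placeholders.
import requests

API_KEY = "your-elevenlabs-api-key"   # from your ElevenLabs account settings
VOICE_ID = "your-cloned-voice-id"     # the ID ElevenLabs assigns to your cloned voice

# The story you want the clone to read, as plain text.
story_text = open("were_doomed_says_the_barista.txt", encoding="utf-8").read()

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Accept": "audio/mpeg"},
    json={
        "text": story_text,
        "model_id": "eleven_multilingual_v2",
        # The knobs worth tinkering with; these are just starting values.
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    },
)
response.raise_for_status()

# Save Clone Michael's narration as an MP3.
with open("clone_michael_reads_the_barista.mp3", "wb") as f:
    f.write(response.content)
```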
I don't know what will become of Clone Michael. There's more tinkering ahead and more experiments. My hope is that Clone Michael will walk, run, and eventually fly, where real Michael only managed to crawl. If Clone Michael ends up reading my stories, it'll be because I believe AI will empower, not replace, me.
But maybe there's an AI that's better than Clone Michael. While futzing around on the ElevenLabs website, I noticed that they also offered licensed celebrity voices. Again, I felt like I was looking at something that was creepy / strange / cool / useful. One of those voices was Clone Burt Reynolds. Naturally, I needed to know how Clone Burt Reynolds compared to Clone Michael, so I had it read the same story.
As it turned out, I liked Clone Burt Reynolds a lot better. Which makes sense. It's Burt freaking Reynolds! And I guess that's the point. For all the whiz-bang technology that goes into AI, it's the quality of the inputs that determines the quality of the outputs. As computer programmers say: garbage in, garbage out.
Or maybe not.
Burt Reynolds was better than Clone Burt Reynolds, but Clone Michael was better than me. In other words, the same AI tool made one thing (me) better and another thing (Burt Reynolds) worse. That's the deal with tools. Fire can keep you warm, and it can burn you; a printing press is equally capable of spreading lies and truth; wheels can bring food to starving people and transport an army bent on starving the people; the internet connects society and breaks it apart. Maybe that's why I'm torn between the two AI camps. I'm primarily worried / hopeful about people, and far less freaked out / geeked out about tools like AI.
A new project from yours truly: Slacker Noir
I launched a new newsletter called Slacker Noir. It's a place for me to talk about crime & mystery fiction and share book news, like how I'm making good progress on a sequel to Not Safe for Work. Slacker Noir is free, and true to the slacker ethos, I'll send out new posts when I get around to it.
A book for people who ❤️ this newsletter
Not Safe for Work is a slacker noir murder mystery set against the backdrop of the porn industry at the dawn of Web 2.0. Like everything you read here, the novel is based on my personal experience, and it's funny as hell. If you love Situation Normal, there's a 420 in 69 chance you'll love Not Safe for Work.
Not Safe for Work is available at Amazon and all the other book places.
The ebook is $0.99, so you can't go too far wrong. Just sayin'.
IAUA: I ask, you answer
Which AI camp are you in? Hint: Both and neither are acceptable answers.
Egg-white omelet, or chocolate chip pancakes?
🧠🤖?
Burt Reynolds?!
Am I wrong, or am I wrong?
New here?
Drop your email address in the box to receive future editions of Situation Normal. And if you're a long-time situation normie who wants to support my work, please consider upgrading to a paid subscription.


I relate to this a lot. I am with the anti-AI crowd for my own writing, but I teach business writing to undergrads and have had to learn about it and figure it out, as they are definitely already using it and will need to use it at work. But I still feel a general dread about where it's all heading.
Thanks for your honesty on the subject! I've read a lot of hot takes, not enough nuance. I have mixed feelings, too.
I listened to your book with a built-in iPhone screen reader, which I prefer to your voice clone. I knew the robotic iPhone voice was going to be clunky, and it was. However, your voice clone sounded very close to your actual voice aesthetically, but it botches tone and jokes throughout because it doesn't understand context beyond "these are negative words so say them angrily." This has been my experience with generative AI. Big promises, big social costs, mild disappointment.
I wish the tech industry had bet big on 3D printing instead, which seems like an amazing product that does pretty much exactly what's advertised. Imagine Google and Meta and Microsoft competing to make 3D-printed food and 3D-printed circuit boards, instead of trying, and failing, to automate the arts and entertainment industry out of a job.