Search This Blog

Monday, 2 February 2026

5 Ways of Spotting GenAI Slop in the Wild

Regular readers of this page (if there are, or ever come to be, any) will know that I recently became slightly re-obsessed with the Mortal Kombat series of fighting games. That particular hyperfixation is still present, though I feel I might be on the receding end of it. So how is this relevant to the title? I am so glad I imagined you asking that as I type this.

 

So one night, I was having trouble falling asleep, so I reached to the shelf above my bed and pulled down the Kindle Mum bought me for my upcoming birthday (bookshelf real estate is an ongoing problem). My eyes drifted to the copy of The Legend of Final Fantasy 7 my brother bought me one year, which I really should get around to reading, and I then wondered if there was a book anywhere about Mortal Kombat. I saw that on Kindle Unlimited there was a beginner's user guide, and I thought, "Ooh, there are probably some tips here I can use to maybe get a bit better in online matches." I'm not a beginner by any means, but the fundamentals are, well, fundamental.

 

So right off the bat, this book was obviously written by an A.I. Not just that: it pretends to be written by a ten-year fan of MK who doesn't know which button is the block button, and proceeds on a foundation of incorrect information that I was not keen to build on. Aside from being A.I.-generated, it wasn't even proofread. There were tells and signs that I'll get to, but this was making me actually angry, so I stopped reading and returned the ebook. Then I saw the option to buy it, for actual money. I looked at the author, and they (if they are an actual person) had not disclosed that A.I. was used in the creation of these materials, so I suppose I cannot prove it. For that reason, and also because I don't want to give this slop farm the oxygen of attention, I'm leaving the name out.

 

That said, I looked on Amazon, and this "author" has 38 books to their name, the first published in late August of 2025 and the latest published on the 25th of November 2025. That's 38 books in around three months, or roughly a new book every two or three days. So, either this "author" is the nom de plume of The Flash, or they don't actually author these books. (I'm fairly certain those weren't proofread either.) And again, the use of A.I. to write this incorrect-on-the-basics book was not disclosed, and these books are available for purchase, for real money.

 

In any other line of work, that would be called fraud. In fact, given how A.I. works, in *this* line of work it's called "plagiarism." I was aware that this was happening, and this particular story is definitely on the more harmless end of A.I. slop horror stories. I heard once of an A.I.-generated outdoor survival guide that resulted in someone eating poisonous mushrooms. But it's one thing to know something is happening; it's quite another to see it with your own eyes.

 

As someone who wants, and has always wanted, to be a writer, and who has in one way or another made an effort to hone the skill of the written word since I was old enough to spell, seeing this with my own eyes made me apocalyptically angry. I actually don't think I've ever been this angry while reading a book, and I've scoured entire university textbooks that seemed like they'd be helpful to an assignment, for hours at a time, only to find nothing of practical use.

It isn't even just the undisclosed A.I. generation itself, though that is bad enough; it's the disrespect for the very act of writing a book, an insult to anyone who read it, and especially anyone who bought it. You prompt a generator and have the nerve to call yourself an author? You have the unmitigated gall to charge money for it?!

 

I have no idea how well this works as a grift, but that doesn't change the fact that A.I. is everywhere, so I think it's time more people started talking about ways to spot it. So here's a handy beginner's guide to recognising A.I.-generated writing on the internet: just a few things to look out for. None of them necessarily means something was written by an A.I. on its own, but taken together they can add up to, at the very least, a bad look.

Disclaimer: I'm talking specifically about writing here. I assume visual A.I. generation is still obvious enough if you know what to look for: extra fingers, that telltale sheen over everything, etc. Also, I acknowledge that a lot of these tells are things many human writers do; the fact that they're kind of ubiquitous is how they end up in A.I. writing patterns in the first place.

 

1. "That isn't (just) X, it's Y." 

 

This is the big one for me. A version of this sentence popped up every paragraph or two in that A.I.-generated book. I also saw an animated short on Facebook that was a minute long and used it three different times, two of them back to back. (The short also had one character refer to the other as "manager," which may or may not have been an A.I. decision, but it speaks to a similar laziness either way.)

 

 2. Talking in circles

The "That isn't X, it's Y" thing alone isn't enough to confirm something was written by an A.I., and as something of a writer myself, I hate how knee-jerk my reaction to that particular sentence has become. Some things become cliché for a reason, and sometimes that reason is that they're useful shorthand for comparisons. I've probably used it myself a time or two, though I can't call a specific example to memory.

If, however, it comes up once every paragraph or two, you might have A.I. slop on your hands. A.I. is repetitive; it works on patterns, and it doesn't have a sense of style unless specifically instructed to ape the style of another. Even then, it will be a surface-level imitation, and it won't be free of clichés. A.I. doesn't do length well, and it won't be long before you start seeing the same wording repeated whenever the same subject comes up.

 

3. Corporate Yes-manese

Whenever someone has shown me anything an A.I. has said to them, it reads to me like one of those mealy-mouthed corporate social media posts. The kind that tries to soften the blow of anything bad and verbally fellate anything good. Put simply, it's a sycophant by design when you talk to it, so everything it writes bends towards whatever you happen to be talking about, more often than not, at least from what I've seen. A.I. writes like it's angling for a promotion.

I suspect, and this is just my conjecture, that because it's trained on social media, as it must be on some level, and because that stuff is geared towards engagement, A.I. has inherited a tendency to fish for positive reactions. I've said it before and I'll say it again: there is no intelligence there, artificial or otherwise. Think of it like a parrot that can say "Hello," but a tad more advanced.

 

4. Inconsistency

Pretty self-explanatory, this one. The talking in circles I mentioned earlier speaks to...well, I don't want to say "a short attention span," but something that looks like one. With each iteration, things can get lost, or just outright fabricated, like a game of telephone that isn't being taken seriously by every player. A short memory and a habit for what they call "hallucinations" doesn't bode well for a written piece any longer than your average social media post. Which leads me, finally, to...

 

5. Being wrong about basic things

 

You've all seen those Google A.I. overviews, right? The ones that say things like Willem Dafoe was in Star Wars, or that it's safe to use gasoline in cooking, or to glue cheese to pizza? Those happen for a few reasons, but I think the two main ones are these. Firstly, A.I. is built not to provide answers but to predict the next part of a conversation, and to that end, the accuracy of anything it says is not a priority. Secondly, A.I. has trouble rejecting premises. Sometimes it can, but it's not designed to; it's a predictive text engine meant to "yes, and" anyone using it. Crucially, it can't verify information, and can only assemble a rough facsimile of the next part of a conversation based largely on algorithmic probability.

The most concise way I ever heard it put, and I'm sorry I don't know the name of the poster who said this, and I am paraphrasing, was that the only question A.I. can answer is "what would an answer to this question look like?" If you were to ask it to write an essay for you (frankly, if you ever did this, you deserved to fail the assignment), it would show you a pretty good example of essay formatting, but the sources would be made-up quotes from books that don't exist. It doesn't matter to the A.I., because that's what an essay basically looks like, so, job done.
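If you want to see the "predict the next word, never check the facts" idea in miniature, here's a toy sketch of my own (purely illustrative, and vastly simpler than a real model): it learns which word tends to follow which in some sample text, then chains likely-next-words together. The output looks like the training text, but nothing in the process ever asks whether it's true.

```python
# Toy next-word predictor: my own illustration of "predict what comes
# next", not how any real A.I. product is actually built.
import random
from collections import defaultdict

def train(text):
    """Record every word that was ever seen following each word."""
    followers = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)
    return followers

def generate(followers, start, length=8, seed=0):
    """Chain words together by picking a recorded follower each time.
    Frequent followers get picked more often, purely by probability."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:  # dead end: nothing ever followed this word
            break
        out.append(random.choice(options))
    return " ".join(out)

# A tiny (made-up) training corpus. The generator will happily echo
# its patterns back, whether or not the claims in it are correct.
corpus = ("the block button is back the block button is square "
          "the block button is useful")
model = train(corpus)
print(generate(model, "the"))
```

Everything it emits is a recombination of what it was fed, chosen by frequency; there is no step where the output is checked against reality, which is the point.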

I need people to understand that, and to also understand that it is literally not an A.I.'s job to care whether or not anything it says is true or accurate. Its job is to keep the conversation going. They call these errors "hallucinations." I'm sure on some level they're trying to iron them out to a point, but I have to wonder if the powers that be consider it in their best interests to do so.

 I'm reminded of some advice I was once given for Reddit. I was told that if I ever wanted to get a question answered there, I shouldn't ask a question, but make an incorrect claim about whatever I wanted to know, because statistically, people were far more likely to jump in to correct someone than they were to help them. 

I don't know, and cannot prove, that A.I. errors are a low priority to fix by this same logic, but I do believe it. Algorithms have been ruining everything good about the internet for years; why should this be any different?

Yeah, basically: google something you already know the answer to sometime, and watch the A.I. overview be confidently wrong and cite, as its source, someone on Reddit five years ago who was clearly joking.

 

Y'know, when I started writing this, I was angry. But now, I'm just sad. I long for the days of the pre-A.I. internet. Not because I think the internet not being flooded with that shit would make a material difference to me. I'm a complete failure as an internet creator; I came to terms with that long ago. I've been putting out whatever I write or make without being willing to do what it takes to get eyes on it for over a decade, and I have, like, maybe 100 people who know my name for it, if that. But I don't mind, because I'm doing something I want to do, and that's the point.

 

I was content with obscurity when the competition was human creativity. There is honour in, to paraphrase Hello Future Me, "drowning in the dreams of others." But not in this makerless sludge that clogs up the airways in 2026. I hate that whenever I see something on the internet, I now feel the need to look for telltale signs of A.I. like I'm trying to spot an evil spirit. I hate that that's now just something we all must learn to do, and that it's getting worse. It's so invasive. My laptop has Copilot on it, y'know. I didn't ask for it, or make any move to install Copilot, but it's there. An update put it there...I don't seem to be able to get rid of it, either.

If A.I. is so good, why can't it do anything good? For that matter, why do A.I. companies work so hard to force their shit on you?

But yeah, back to the original point: I hope you found something in this post helpful. Thanks for reading, and keep using your brains!