Enki's Website

The Algorithm's Verdict: When Writing Well Is Suspicious

I ran one of these AI detection tests and stared at the screen for a long time. 78% probability that it was generated by artificial intelligence. It wasn't just any text. It was something I had thought through, structured, and carefully reviewed. I had followed everything I was taught for years: clarity, coherence, logical progression. Turns out that's now a robot's fingerprint.

The ironic part is that the analyzer is right in its observations but wrong in its conclusion. It says my style is "too structured, with clear subheadings." Of course it is. Is confusion really the hallmark of humanity? For decades, good professional writing was defined by the ability to bring order to chaos. If I organize my ideas into a logical progression, it's not because I'm using a language model; it's because I respect the reader's time. But that same neatness is what gives me away.

The report also points out the consistent use of metaphors. In my text I spoke of "personal system architecture" and "uninstalling models." The analyzer says it's "quite uniform." For me, that's simply having a through-line. If I use a technical metaphor to explain a technical concept, it's coherence, not a pattern of automatic generation. But AI has learned to be coherent, and now we pay the price for that coherence.

What bothers me most is the point about tone. The detector marks me as "confident, categorical, and somewhat provocative." It says it's typical of content tuned for impact. Since when is having a firm opinion synonymous with being a bot? I write about the future of work, about AI, about structural changes. These are topics that require defined positions. If I start doubting in every paragraph, putting in conditionals for fear of sounding artificial, the text becomes useless. But if I affirm with security, the algorithm raises its hand and says: "this isn't human, no human is this sure."

And here is the wall many of us crash into. What exactly am I expected to do? The analysis implicitly suggests that to sound human I need irregularity, less fluidity, maybe more mess. But I'm not writing poetry or a personal diary entry. I'm writing about professional topics. I can't arbitrarily stick in an anecdote about coffee going cold or street noise just to lower my AI score. That would be dishonest. It would be writing poorly on purpose just to seem authentic.

If the topic is abstract—the future, strategy, technology—the language tends to be abstract. AI has been trained on millions of articles about "the future of work", so when I write about that, I use similar words. Not because I copied them, but because it's the vocabulary the context demands. The detector sees lexical overlap and assumes algorithmic plagiarism, when in reality it's just professional convergence.

Fluidity is perhaps the most frustrating accusation. "Too polished, without errors or irregularities." Should I leave spelling mistakes? Should I make disjointed paragraphs to prove I'm flesh and bone? Technical perfection was always the goal. Now it's proof of the crime.
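The complaint above, that uniform and polished prose reads as machine-made, can be made concrete with a toy metric. Detectors are often said to weigh "burstiness," the variation in sentence length across a text, treating very even sentences as a machine signal. The sketch below computes a crude version of that score; it is a hypothetical illustration of the idea, not the actual method used by this or any real analyzer.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' proxy: variation in sentence length.

    Low variation (evenly sized sentences) is the kind of signal
    detectors are said to read as machine-like. This is an
    illustrative sketch, not any real detector's algorithm.
    """
    # Split on sentence-ending punctuation (crude, for illustration).
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    # Coefficient of variation: stdev of length relative to mean length.
    return statistics.stdev(lengths) / statistics.mean(lengths)

even = "The plan is clear. The goals are set. The work begins now."
uneven = ("Yes. The plan, after weeks of false starts and "
          "second-guessing, is finally clear enough to act on.")
print(burstiness(even) < burstiness(uneven))  # prints True
```

Note what this toy scorer rewards: the carefully balanced paragraph scores as "suspicious," while the ragged one scores as "human." That is exactly the inversion the essay describes.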

I find myself at an absurd crossroads. If I write as I was taught, I'm an algorithm. If I write messily, I'm an amateur. If I use consistent metaphors, I'm a machine. If I have a firm opinion, I'm optimized content.

In the end, I think the problem isn't my text. The problem is that we've normalized artificial intelligence as the standard for "correctness." What we used to call professionalism, we now call automatic generation. And while detectors keep looking for imperfection as proof of life, those of us who take writing seriously will keep getting that 78% in our faces, wondering if it's worth polishing the next sentence or if we should leave it half-finished just so they believe we're real.

I don't have a solution. I just have the frustration of knowing I've written the best I can, with sense and structure. And that, today, is the most suspicious I can be.