Recently, I was watching a TV show that ended on a cliffhanger for season one. Naturally, I wanted to know if season two was coming, so I did a quick search. The top result promised some drama: “Big changes could ruin season 2 of this show.” I clicked, expecting something huge. Oh no! Maybe they’re replacing the main actor, or killing someone off, or changing the style?
Nope.
What I got was 1,000 words of AI-generated slop, loosely based on a vague quote from the director: “We learned a lot on season one, so we’ll be making some changes for season two.” That was it. The wider context of the quote suggested the crew had a tough filming schedule, so they’re spacing things out a bit more. A perfectly normal behind-the-scenes tweak that will only affect viewers by making them wait a month or two longer for season two. AI had churned this into a clickbait article full of empty speculation, repetitive phrasing and formulaic sentences.
Even worse, it came from a major entertainment site. It was so bland and obviously AI-generated that it actually put me off reading anything else from them. Every day I see businesses using AI-created copy and imagery, and it doesn’t make me think you are a smart tech whiz. It makes me think you are too lazy to make good work yourself, too cheap to hire good people to do the jobs you can’t, and too sloppy to notice the glaring errors and low quality. Which doesn’t make me want to spend my hard-earned money with you.
AI Can Spot Patterns But It Can’t Understand Meaning
AI works by finding patterns, not by understanding content or the wider context that sits around it. A well-known example comes from the medical field, where researchers trained an AI to look at scans and decide if a patient was sick or not. At first, the results looked impressive – the AI seemed to “know” when patients needed treatment.
But on closer inspection, it turned out the AI wasn’t analysing the content of the scans at all. The training data had taken its healthy examples from Hospital A and its unhealthy examples from Hospital B, and each hospital used a different typeface to label its scans. So all the AI had learnt was that patients whose labels were set in Helvetica (Hospital A) were usually healthy, while patients whose labels were in Times New Roman (Hospital B) were sick. Show it a clearly sick patient with a Helvetica label and it would recommend discharging them, while healthy patients labelled in Times New Roman got diagnosed with life-threatening diseases.
Although it sounds ridiculous, it’s also a perfect example of how AI doesn’t understand context or meaning. It just mimics patterns. And when those patterns are off, the output, no matter how long or well-formatted, is meaningless or misleading, and potentially even dangerous.
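If you’re comfortable with a little code, here’s a minimal toy sketch in Python of how that kind of shortcut happens. It is not the actual study and the data is entirely made up: the “scans” are pure noise, but a which-hospital feature leaks into the training data and lines up perfectly with the labels, so the model looks brilliant without learning anything about the scans themselves.

```python
# Toy illustration only: a model that "succeeds" by learning a confound.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
scan_pixels = rng.normal(size=(n, 50))        # meaningless "scan" content: pure noise
hospital = rng.integers(0, 2, size=n)         # 0 = Hospital A, 1 = Hospital B
labels = hospital                             # every sick example happens to come from Hospital B

X = np.column_stack([scan_pixels, hospital])  # the confound sneaks in as a feature
model = LogisticRegression(max_iter=1000).fit(X, labels)
print("Accuracy on the training data:", model.score(X, labels))  # near-perfect...

# ...but flip which hospital each scan "comes from" and the diagnoses flip too,
# even though the scans themselves haven't changed at all.
X_flipped = X.copy()
X_flipped[:, -1] = 1 - X_flipped[:, -1]
print("Accuracy after swapping hospitals:", model.score(X_flipped, labels))
```

Run it and the training score looks superb; swap the hospital feature and it collapses, because the only “pattern” the model ever learnt was which hospital the scan came from – the code equivalent of reading the typeface on the label.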

You can find another good example of AI misinterpreting patterns right in your pocket. Your phone probably has a smart search that can find photos in your camera roll. Here’s what it found when I typed in “Railway”…
One actual video filmed out of a train window, and one of a model train ride. Fair enough, that is sort of railway-related content. There are also 2 photos featuring some tram tracks that I can let slide. But there’s also a photo of an airport, 3 helmet cam videos of me riding my bike and a random street that I have no idea why I photographed. So of the 9 images found, only 1 is of an actual railway, 3 are railway-adjacent and 5 are completely unrelated. If you had an archivist with an 11% success rate, you wouldn’t be very impressed.
The Cost of “Cheap” Content
AI tools are getting faster, cheaper, and more embedded into everyday tech. That makes it tempting to lean on them more, but for small businesses, what you are really selling is you and your expertise.
I can buy all the food I need at Tesco, but I shop at the greengrocer or butcher because I trust that they’ve picked the best quality. They know what’s in season. They’ve got the experience to find something more interesting, tastier or better value than a massive supermarket driven by volume and margins.
It’s probably the same with your business. People aren’t just buying your product – they’re buying your experience, your judgment and your expertise. That’s what turns a passable product into an amazing one, and that’s exactly what gets lost in AI-generated content.
So when I land on a business’s site and it’s full of lifeless, generic writing, I feel a bit cheated. I’d much rather see spelling mistakes or clumsy copy that has clearly been written by a real person. I’d much rather trust someone with some slightly poorly lit mobile phone pictures on their site than an AI-generated image that I know for sure won’t be anything like the product or service I actually receive. Using AI-generated art or AI-written blog posts might save you time or money upfront, but it risks sending a damning message to your potential customers and clients: “I’m not invested enough in this to do it properly.”

Not to mention that AI uses huge amounts of electricity and is already a fast-growing source of the emissions that drive climate change. Having a load of sustainability policies is great, but if you are regularly using AI then you might as well just chuck a load of plastic straws in the sea and chop down the nearest tree for good measure. We haven’t even got started on the ethics of using an AI trained on stolen copyrighted work either…
Authenticity Sells – AI Doesn’t
Social media experts are always promoting authenticity as the best way to build your brand and the same goes for your website. People want to connect with other people. If your brand is built around you, then your content should reflect that. Your story and your insights are where you create value, and ChatGPT can’t replace that.
When small business owners use AI to write blogs or social posts, the result is usually content that feels like it “could be from anywhere.” If your business is unique, your content shouldn’t sound like it was churned out by a robot writing for half the other businesses in your industry as well. The content on your site helps potential customers understand what it’s like to work with you, or use your service, or buy your product. Authentic and human is way more attractive than AI slop, and as people become increasingly aware of just how much content is AI-generated, they’ll be willing to pay a premium for that human touch.
So When Should You Use AI?
This is not to suggest that there aren’t some good uses for AI.
If you are dyslexic or really struggle with words, then it’s a great way to turn your bullet points into a longer article, especially if you then go through and finesse any bits that don’t sound quite right. Use an AI notetaker to record meetings and send out minutes, rather than relying on scraps of paper.
If you’ve got to archive 100,000 photos into different categories then absolutely get an AI to guess whether each image shows a flower, a person or a car. Although it didn’t do a great job on my railway image search, it did help me narrow down the number of images to look through quite considerably.
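For the technically curious, here’s a rough Python sketch of what that kind of first pass could look like, using an off-the-shelf image classifier from torchvision. The folder names and model choice are illustrative assumptions, and the whole point is that a human still reviews every bucket afterwards.

```python
# A rough sketch: let a pretrained classifier guess a label for each photo,
# copy the files into buckets by that guess, then have a human review each bucket.
# The folder paths are hypothetical; any off-the-shelf classifier would do.
from pathlib import Path
import shutil

import torch
from PIL import Image
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()            # the resizing/normalisation the model expects
categories = weights.meta["categories"]      # ImageNet class names ("daisy", "sports car", ...)

photos = Path("photo_archive")               # hypothetical input folder
review_dir = Path("sorted_for_review")

for photo in sorted(photos.glob("*.jpg")):
    image = preprocess(Image.open(photo).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        guess = categories[model(image).argmax().item()]
    bucket = review_dir / guess.replace(" ", "_")
    bucket.mkdir(parents=True, exist_ok=True)
    shutil.copy2(photo, bucket / photo.name)  # copy rather than move: a person still checks
```

It will get plenty wrong, as my railway search shows, but narrowing 100,000 photos into labelled buckets for a human to check is a very different job from letting AI publish anything on its own.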
AI is also being used to find interesting patches of sky for space scientists to look at in more depth, or to flag complex medical data for further review by doctors. It’s being used to process hours of audio recorded at railway sidings, analysing the birdsong in the background to work out which species of birds live there and help conservationists identify endangered populations. However, the key factor in all this good work is that a human eventually reviews it and takes action.
Don’t rely on AI to connect with your customers when the product your customers actually want is you. Sacrificing your authenticity and originality for banal, blandly professional content is not going to get the leads and results you want, no matter how quick and easy it feels to create.
How We Use AI
I’ve yet to be impressed by any AI-created text or imagery; the generic writing style is easy to spot, as are the obvious mistakes and repetitive content, so we’ll continue to prioritise imagery, content and design made by real people. There are only two use cases I’ve found for AI that seem to be genuinely helpful and productive.
Code error messages can be really obscure, and debugging code to find the root cause can be painstaking and time-consuming work. Being able to review a code block with AI is great for finding those annoying typos, or working out why a piece of code isn’t doing quite what you expected, but I wouldn’t trust it to write code from scratch or edit the codebase directly. The other reasonable use I’ve found is creating patterns or abstract images. For example, I used an AI-generated image when a client needed a very specific but obscure wood texture for a background and the stock photo archive only had pine and oak planks.