“Man bites robot dog."

Which was the headline that popped into my head after reading freakish news from California's state capital about an AI-equipped mechanical mutt being credited with solving a crime that, as it happened, never occurred.

True then, true now: “All I know is what I read in the newspapers.”

Actually, maybe not so much now, thanks to sadly declining newspaper readership.

But no need to go there, and, while we’re at it, no need to serve up yet another tediously predictable AI rant.

After all, LLM technology, entirely based on predictive pattern recognition, could likely deliver a reasonably serviceable level of bitter all by its own damn self.

But here’s the question that hallucinating robotic canines do bring to the fore: if the companies pushing AI are going to be responsible for changing everything, shouldn’t they change it responsibly?

To put this in the narrow frame of my chosen profession: if you're going to fuck with advertising, shouldn't you fucking make it better?

Sorry for the spicy, but really.

We see the hazards all too clearly: decimated creativity and declining originality. Frictionless production, symbiotically linked to programmatic least-cost/lowest-value media purchasing.

Vastly amplified, ergo even more universally ignorable, message overload.

The same shite, only doubled down, that’s resulted in close to zero advertising-attributable brand growth among major brands over the last 16 all-digital dog years.

The middle finger of blame for all of this, of course, doesn’t rest solely, or even disproportionately, on either the soulless digital platforms or, more recently, their new AI-inventing counterparts.

The fault, dear clients and agency colleagues, lies not only in our magnificent seven superstars, but in ourselves.

But instead of backing up over roadkill, maybe we start asking ourselves, “What means better?”

How about this for a spitball: we start by forever signing off from the illusory hype cycle, and settle on a legitimate quality-centric understanding of what present-level AI can and should (and can’t and shouldn’t) be asked to do.

Then we, as an industry, engage with the developers, also as an industry, and talk about how we address the mile-wide, inch-deep crap level in way too much of the output.

Finally, we recognize this is both a “learnable” and “teachable” moment and that, just as Apple, in its early glory days, did a whole lot of educating about a whizzy new technology, the same opportunity — and need — exists today.

Fantasy? Maybe, maybe not.

But if you listen closely, what the robot dog in the news is saying is painfully clear.

If we allow ourselves to believe that good enough is good enough — in AI, or anything else — we’re just barking mad.
