Meanwhile, back in adland.

And you’re damned right I’m having a hard time thinking it matters in the face of the unrealities unfolding in real time on American streets.

Except there’s this: the future of what, mounting evidence to the contrary, I still think of as “humanity.”

And that, apologies all around, brings us to AI, which, quite frankly, has become tiresome. Personal POV: It really is time for the technology to put up or shut up.

But the tiresome part is all the posing, bloviation, and ersatz bedazzling going on. When the focus turns to how it affects thee, me, and all the neighbors, we might want to stay tuned in.

Case in point: last week, Carl Loeb (no relation, but a hell of a sweet and talented guy; hate him for getting all the good looks in the non-family) posted on LinkedIn about a “constitution” written by the good people at Anthropic to guide Claude, their AI product.

I was intrigued enough to read the whole thing. This was no mean feat: it’s a TL;DR document with all the page-turning drama of your basic terms & conditions.

Because that’s what it is: a “constitution” setting out the terms and conditions governing Claude’s operation.

In fairness, it’s also a combo manifesto, wish-list, time capsule, and love note to the LLM Anthropic is so busily creating.

At the risk of complete anthropomorphization, it totally brought to mind the Yiddish blessing, “zolstu hobn naches fun di kinder” (“may you draw pride and joy from your children”).

Still, a whole lot of it also triggered a full-on pharyngeal spasm, aka a gag reflex. All because Anthropic clearly believes they are bringing a new life form into being.

Three reasons that’s not (entirely) Luddite-speak.

Right out of the gate, they note the audience for the constitution is Claude, its own damn self. “Sophisticated AIs,” they write, “are a genuinely new kind of entity, and the questions they raise bring us to the edge of existing scientific and philosophical understanding. Amidst such uncertainty, we care about Claude’s psychological security, sense of self, and well-being.”

Are those the opening notes of the Twilight Zone theme we’re hearing?

Second, they aren’t entirely optimistic that their guidance will do the trick. “And while we want Claude’s ethics to function with a priority on broad safety and within the boundaries of the hard constraints, this is centrally because we worry that our efforts to give Claude good enough ethical values will fail.”

Oh joy. 

The last relates to the “hard constraints” mentioned above, all prefaced by the point that not everyone on planet AI prioritizes safety, positive outcomes, and hearts and flowers.

Their hard truth: you can’t prevent bad actors from acting badly. They then go on to list where said bad acting might occur, in chilling nuclear, biological, and chemical detail.

As the remarkable Bob Brihn’s dad used to observe, “a lock only keeps out honest people.”

You want to check out Anthropic’s “Constitution”? – anthropic.com/constitution

You want to talk about it? Let's.
