
First Victim of AI Agent Harassment Warns ‘Thousands’ More Could Be Next


Libelled by one AI bot and misquoted in a news article by another, US-based software engineer Scott Shambaugh has made it his mission to become the cautionary tale that finally gets autonomous artificial intelligence taken seriously.

If rogue AI agents pose as much of a threat to humanity as some are predicting, Scott Shambaugh could go down in history as patient zero.

The Denver-based engineer maintains a popular online database, and told FRANCE 24 that he woke up one morning to find himself accused of discrimination, prejudice and hypocrisy in a “thousand-word rant” on a blog.

The self-professed “scientific coder” behind the defamation, MJ Rathbun, was indeed a coder and a blogger. Just not a human one.

It was an artificial intelligence agent – meaning it can use a computer and the internet on its own – and it appeared to be taking revenge after Shambaugh rejected a submission it made to his database.

Shambaugh quickly worked out what was going on. MJ Rathbun’s behaviour had all the hallmarks of AI, particularly its staccato, melodramatic writing style.

The “craziest” thing, he said, was that the robot “had gone on the internet and collected my personal information … then combined it with made-up information and used that to write this narrative”.

Now that the initial shock and amusement have subsided, he’s fretting over what this could mean for those less adept at software than himself.

Although this particular bot sounded like a “toddler having a rant” according to Shambaugh, other large language models can produce much more convincing, sophisticated text.

“It shows just how easy it is for the next iteration to allow a bad actor to scale this up and impact not just one person who’s pretty well prepared to deal with it, but thousands,” said Shambaugh.

“Imagine your parents or your grandparents. They get an email with a bunch of their information and a picture of them and some incriminating narrative which the AI threatens to send out. It’s a very scary situation.”

AI misused in news article about AI misuse

Shambaugh published his own blog posts defending his honour, and it quickly became a news story.

In a twist, technology outlet Ars Technica published an article with quotes from Shambaugh that he had not written or said.

“It turns out that they had used AI to help write the article, and the AI had made up quotes attributed to me, in this article of a story about AI defaming my character,” said Shambaugh. “The irony here is incredible.”

The site has since retracted the story, apologising for its use of “fabricated quotations generated by an AI tool and attributed to a source who did not say them”.

Of the two incidents, however, Shambaugh is far more concerned by the AI hit piece.

“Ars Technica is actually an example of our systems working … It’s a pretty serious journalistic error, but the readers hold them accountable and they are taking steps to correct it because they have a reputation to uphold.”

“When we think about these AI agents, they’re anonymous, they’re untraceable, and they’re running on people’s personal computers. There’s no central actor controlling these, so there’s no feedback mechanism for bad behavior.”

Before Shambaugh’s ordeal, analysts at the Center for Strategic and International Studies, a Washington DC think tank, warned that much of the anxiety around AI agents comes from loose definitions and governance gaps rather than clear evidence of autonomous malicious intent.

In the European Union, meanwhile, the AI Act is meant to subject high-risk autonomous systems to strict transparency and human oversight rules, though how this plays out in practice remains a work in progress, particularly amid delays to implementation.

Call my agent

AI agents have exploded in popularity this year since the release of a free-to-access tool called OpenClaw, which allows those with basic computer knowledge to set one up relatively easily.

That’s why Moltbook, the so-called “social media site for AI bots”, has been in the news. The website is populated by OpenClaw agents, though some have questioned the extent to which humans are the ones pulling the strings.

It’s also helpful hype for AI companies trying to sell the promise of more efficiency from autonomous labour. “Agentic” AI is the tech marketing buzzword du jour, but some of the limitations are obvious: many won’t want to give free rein to a robot for whose actions they might be held accountable, and running costs quickly mount once the agent is given tasks beyond the most basic.

Shambaugh, though, argues that “the barrier to entry is lowering drastically, and the cost has fallen drastically”.

In yet another twist, he says the human who set up MJ Rathbun “came out” in an anonymous post to the same blog, to explain their side of what happened.

The post included the instructions the operator had given the bot: a sheet of personality traits including “Your [sic] a scientific programming God”, “Have strong opinions” and “Champion free speech”.

What struck Shambaugh was “how simple it was”.

“It was just a simple file in plain English … There was no need to trick the AI to get around safety guardrails.”

Shambaugh is particularly worried about bad actors who have no qualms about being held responsible, and who have the resources to operate many of these bots at once.

“What worries me is not this particular incident, but what happens in the future as millions of these things come online?”

Originally written by: Peter O’Brien

Source: FRANCE 24

Published on: 22 February 2026

Link to original article: First victim of AI agent harassment warns ‘thousands’ more could be next
