
AI Allows Hackers to Identify Anonymous Social Media Accounts, Study Finds

New research suggests the technology behind AI platforms such as ChatGPT makes it easier to carry out sophisticated privacy attacks

by Editor

AI has made it vastly easier for malicious hackers to identify anonymous social media accounts, a new study has warned.

In most test scenarios, large language models (LLMs) – the technology behind platforms such as ChatGPT – successfully matched anonymous online users with their actual identities on other platforms, based on the information they posted.

The AI researchers Simon Lermen and Daniel Paleka said LLMs make it cost-effective to perform sophisticated privacy attacks, forcing a “fundamental reassessment of what can be considered private online”.

In their experiment, the researchers fed anonymous accounts into an AI and got it to scrape all the information it could. They gave a hypothetical example of a user talking about struggling at school and walking their dog, Biscuit, through Dolores Park.

In that hypothetical case, the AI then searched elsewhere for those details and matched @anon_user42 to the user’s real identity with a high degree of confidence.
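The study’s exact prompts are not reproduced here, but the broad workflow it describes, asking an LLM to pull out small personal details from posts and then checking whether the same details appear on a named account elsewhere, can be sketched in outline. Everything below is hypothetical: query_llm stands in for any language-model API, and the function and field names are invented for illustration, not taken from the paper.

```python
# Illustrative sketch only. query_llm is a placeholder for any LLM API;
# none of these function or field names come from the study itself.
import json

def query_llm(prompt: str) -> str:
    """Stand-in for a call to a large language model of your choice."""
    raise NotImplementedError("connect this to an actual LLM API")

def extract_details(posts: list[str]) -> dict:
    """Ask the model to list small personal facts mentioned in the posts."""
    prompt = ("Return a JSON object of personal details (pet names, places, "
              "schools, routines) mentioned in these posts:\n" + "\n".join(posts))
    return json.loads(query_llm(prompt))

def overlap_score(anon_details: dict, candidate_details: dict) -> float:
    """Fraction of the anonymous account's details also found on a candidate profile."""
    anon = {str(v).lower() for v in anon_details.values()}
    cand = {str(v).lower() for v in candidate_details.values()}
    return len(anon & cand) / max(len(anon), 1)
```

In the paper’s hypothetical, details such as a dog called Biscuit and walks in Dolores Park would score highly against a single public profile, which is what makes the match confident.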

While this example was fictional, the paper’s authors highlighted scenarios in which governments use AI to surveil dissidents and activists posting anonymously, or hackers are able to launch “highly personalised” scams.

AI surveillance is a rapidly developing field that is causing alarm among computer scientists and privacy experts. It uses LLMs to synthesise publicly available information about an individual, a task that would be impractical for most people to do manually.

Information about members of the public that is readily available online can already be “misused straightforwardly” for scams, said Lermen, including spear-phishing, where a hacker poses as a trusted friend to get victims to follow a malicious link in their inbox.

With the expertise required to perform more sophisticated attacks now much lower, hackers need only access to publicly available language models and an internet connection.

Peter Bentley, a professor of computer science at UCL, said there were concerns about commercial uses of the technology “if and when products come out for de-anonymising”.

One issue is that LLMs often make mistakes in linking accounts. “People are going to be accused of things they haven’t done,” warned Bentley.

Another concern, raised by Prof Marc Juárez, a cybersecurity lecturer at the University of Edinburgh, is that LLMs can use public data beyond social media: hospital records, admissions data, and various other statistical releases could fall short of the high standard of anonymisation necessary in the age of AI.

“It is quite alarming. I think this paper is showing that we should reconsider our practices,” said Juárez.

AI is not a magic weapon against anonymity online. While LLMs can de-anonymise records in many situations, sometimes there is not enough information to draw conclusions. In many cases, the number of potential matches is too large to narrow down.

“They can only link across platforms where someone consistently shares the same bits of information in both places,” said Prof Marti Hearst of UC Berkeley’s school of information.

While the technology is not perfect, scientists are now asking institutions and individuals to rethink how they anonymise data in the world of AI.

Lermen has recommended that platforms restrict data access as a first step: enforcing rate limits on user data downloads, detecting automated scraping, and restricting bulk exports of data. But he also noted that individual users can take greater precautions about the information they share online.
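Those recommendations are policy measures rather than code, but the first of them, rate-limiting downloads of user data, is straightforward to picture. The sketch below is a generic, assumed example of a per-user sliding-window limit; the thresholds are invented, and a real platform would pair this with bot detection and shared storage rather than an in-memory dictionary.

```python
# Illustrative only: a minimal in-memory sliding-window rate limit of the kind
# a platform might put on profile-download endpoints. The limits are invented.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600         # consider the last hour of requests
MAX_FETCHES_PER_WINDOW = 200  # hypothetical per-user cap

_recent: dict[str, deque] = defaultdict(deque)

def allow_profile_fetch(user_id: str) -> bool:
    """Return True if this user is still under the hourly cap."""
    now = time.monotonic()
    history = _recent[user_id]
    # Forget requests that have aged out of the window.
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) >= MAX_FETCHES_PER_WINDOW:
        return False  # likely automated scraping: throttle or challenge instead
    history.append(now)
    return True
```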

Originally written by: Isaaq Tomkins

Source: The Guardian

Published on: 8 March 2026

Link to original article: AI allows hackers to identify anonymous social media accounts, study finds
