How to spot AI-generated social media accounts before they fool you

April 14, 2025

SOCIAL media has become a battleground for real and artificial identities, with artificial intelligence (AI)-generated accounts spreading misinformation, boosting engagement numbers, or even impersonating real people.

These accounts can be surprisingly convincing. However, a closer look often reveals subtle inconsistencies that give them away.

Whether they are promoting fake news, scamming users, or manipulating online conversations, it is crucial to know how to distinguish between real people and artificial entities.

By learning to recognise the warning signs, you can protect yourself from being misled and help maintain a more authentic online space. Here’s what to watch out for.

Associate Professor Dr Massila Hamzah, a senior research fellow at Taylor’s University’s School of Media and Communication (SOMAC) under the Faculty of Social Sciences and Leisure Management, emphasised that AI-generated content poses risks to individuals, especially when used for deceptive or malicious purposes.

"Engaging with AI’s imaginative potential should be balanced with awareness of its ability to fabricate misleading or deceptive content," she told Sinar Daily.

According to Massila, staying informed about the risks and challenges of AI-generated content can help users navigate social media safely.

Meanwhile, Dr Afnizanfaizal Abdullah, Innovation Commercialisation head at the Malaysian Research Accelerator for Technology and Innovation (MRANTI) and an AI expert, stressed that distinguishing AI-generated accounts from real users on social media can be challenging.

However, he noted that several key red flags can help identify them.

Below are key warning signs that an account may be AI-generated:


Suspicious profile details and unrealistic images

AI-generated accounts often have incomplete, generic, or inconsistent profile details. They may use stock images, deepfake-generated faces, or profile pictures that appear overly polished with no imperfections.

"Sometimes they use stock images or randomly generated usernames," Afnizanfaizal said.

If an image looks too perfect or has visual distortions, such as mismatched earrings or asymmetrical facial features, it may have been created using AI.


Low follower count but following many accounts

Many AI-generated accounts attempt to appear legitimate by following a large number of users while having very few followers in return.

This pattern suggests an attempt to build credibility quickly.

Afnizanfaizal also stressed that social media users should be wary of accounts that gain followers at an unnatural rate, especially when most of their followers seem inactive or fake.
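
To make this pattern concrete, here is a minimal sketch of the follower-ratio heuristic in Python. The Account fields, the threshold values and the example username are illustrative assumptions for this article, not part of any platform’s API.

```python
# A minimal sketch, not a definitive detector: flag accounts that follow
# many users but have very few followers in return.
from dataclasses import dataclass

@dataclass
class Account:
    username: str
    followers: int
    following: int

def follow_ratio_flag(account: Account,
                      min_following: int = 200,
                      max_ratio: float = 0.05) -> bool:
    """Return True if the account follows many users but few follow back."""
    if account.following < min_following:
        return False  # too little activity to judge either way
    ratio = account.followers / max(account.following, 1)
    return ratio < max_ratio

# Example: follows 1,500 accounts but only 20 follow back -> flagged
print(follow_ratio_flag(Account("new_user_8231", followers=20, following=1500)))
```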


Generic or repetitive engagement patterns

AI-generated accounts tend to interact with content in an unnatural way.

Afnizanfaizal shared that comments may be vague, overly positive, or repetitive, such as “Great post!” or “Thanks for sharing!” without meaningful engagement.

"Users should carefully observe specific signs of inauthenticity, such as inconsistencies in profile details, unnatural or repetitive engagement patterns and overly polished images that lack imperfections," he said.

These accounts often flood social media with a high volume of posts in a short period, making their activity seem excessive or unnatural.
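
The repetitive, generic commenting Afnizanfaizal describes can also be pictured with a short sketch. The phrase list and the 50 per cent cut-offs below are assumptions chosen for the example, not established thresholds.

```python
# A rough illustration: score how repetitive and generic an account's
# comments are, based on duplicate comments and stock phrases.
from collections import Counter

GENERIC_PHRASES = {"great post!", "thanks for sharing!", "nice!", "so true!"}

def engagement_red_flags(comments: list[str]) -> dict:
    normalised = [c.strip().lower() for c in comments]
    if not normalised:
        return {"duplicate_share": 0.0, "generic_share": 0.0, "suspicious": False}
    counts = Counter(normalised)
    duplicate_share = counts.most_common(1)[0][1] / len(normalised)
    generic_share = sum(1 for c in normalised if c in GENERIC_PHRASES) / len(normalised)
    return {
        "duplicate_share": round(duplicate_share, 2),  # share taken by the single most repeated comment
        "generic_share": round(generic_share, 2),      # share made up of stock phrases
        "suspicious": duplicate_share > 0.5 or generic_share > 0.5,
    }

print(engagement_red_flags(["Great post!", "Great post!", "Thanks for sharing!", "Great post!"]))
```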


Unusual language, awkward phrasing, or robotic tone

Afnizanfaizal noted that AI-generated accounts often exhibit language inconsistencies.

These include awkward phrasing, outdated slang, or grammatical errors. Some may also sound overly formal or robotic, lacking the natural flow of human communication.

Paying close attention to logical inconsistencies and unusual communication patterns can help identify fraudulent accounts.

If an account’s responses feel unnatural, it could be AI-generated.


Suspicious connections and amplification of specific messages

Some AI-generated accounts are part of larger networks designed to spread misinformation or manipulate online discussions.

These accounts often promote specific narratives, use the same hashtags repeatedly, or engage in coordinated activities to amplify a particular message.

A sudden spike in followers, especially from questionable sources, is another red flag.
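
As a rough illustration of how coordinated hashtag use might be measured, the sketch below computes the average hashtag overlap across a small group of accounts. The account names, hashtags and the 0.8 threshold are hypothetical.

```python
# An illustrative check, not a full coordination detector: high average
# overlap in hashtag use across many accounts can hint at coordinated
# amplification of the same message.
from itertools import combinations

def average_hashtag_overlap(accounts: dict[str, set[str]]) -> float:
    """Average pairwise Jaccard similarity of the accounts' hashtag sets."""
    pairs = list(combinations(accounts.values(), 2))
    sims = [len(a & b) / len(a | b) for a, b in pairs if a | b]
    return sum(sims) / len(sims) if sims else 0.0

network = {
    "acct_a": {"#vote2025", "#truthnow", "#wakeup"},
    "acct_b": {"#vote2025", "#truthnow", "#wakeup"},
    "acct_c": {"#vote2025", "#truthnow", "#wakeup"},
}
overlap = average_hashtag_overlap(network)
print(overlap, "possible coordination:", overlap > 0.8)
```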


Posting at odd hours and automated behaviour

AI-driven accounts do not follow typical human activity patterns. They may post content at all hours of the day, including during times when most real users are inactive.

Some may also exhibit high engagement with low-quality or sensationalist content, further signalling automated behaviour.
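
One simple way to picture this timing check: given a list of post timestamps in the account’s local time, estimate how much activity falls in the middle of the night and how machine-regular the gaps between posts are. The 1am to 5am window and the one-minute variance cut-off below are illustrative assumptions.

```python
# A simple sketch: night-time posting share plus near-identical gaps
# between posts, which suggest a scheduler rather than a human.
from datetime import datetime
from statistics import pstdev

def timing_red_flags(timestamps: list[datetime]) -> dict:
    ordered = sorted(timestamps)
    night_share = sum(1 for t in ordered if 1 <= t.hour < 5) / len(ordered)
    gaps = [(b - a).total_seconds() for a, b in zip(ordered, ordered[1:])]
    clockwork = len(gaps) > 2 and pstdev(gaps) < 60  # gaps vary by under a minute
    return {"night_share": round(night_share, 2), "clockwork_intervals": clockwork}

posts = [datetime(2025, 4, 14, h, 0) for h in (2, 3, 4, 5)]
print(timing_red_flags(posts))  # {'night_share': 0.75, 'clockwork_intervals': True}
```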


Verification status alone does not settle authenticity

Meanwhile, Massila said that while verified accounts (those with a blue checkmark) are less likely to be fake, an account’s verification status alone does not settle whether it is authentic.

"Users should utilise verification tools designed to detect AI-created faces, mismatched profile details, and suspicious posting patterns," she said.

Many AI-generated accounts operate under real-sounding names and bios, making them harder to detect. Cross-checking information across multiple platforms can help determine an account’s legitimacy.


Using AI detection and fact-checking tools

Afnizanfaizal said that, to protect against AI-generated deception, users should utilise tools like Botometer and Followerwonk, which analyse engagement patterns and detect bots.

"Cross-checking profile information with other sources or platforms can help verify authenticity," he said.

Checking for AI-generated faces, profile inconsistencies and suspicious posting behaviour can also prevent users from falling victim to misinformation.

Verifying information across multiple platforms and examining engagement patterns can be key to distinguishing AI-generated imposters from genuine users.

By remaining cautious, cross-referencing information, and leveraging available detection tools, social media users can better protect themselves from AI-generated accounts designed to manipulate, deceive, or spread false narratives.
