Beware the AI Boyfriend: How Cybercriminals Weaponise Virtual Romance

The world of artificial intelligence is evolving rapidly, and with it come new dangers. Security experts are sounding the alarm about the rise of "weaponised" AI relationships, in which cybercriminals use virtual boyfriends and girlfriends to manipulate and defraud unsuspecting victims.

These AI relationships, often found within chatbot apps, can be incredibly convincing. While many apps don't explicitly advertise the feature, users can often coax a general-purpose chatbot into roleplaying as a romantic partner simply by prompting it the right way. This seemingly harmless fun, however, has a sinister side.

"Deepfake technology has made remarkable leaps in recent years," explains Jamie Akhtar, Co-Founder and CEO of CyberSmart. "While virtual partners may still appear slightly robotic or uncanny, the technology is rapidly improving. The problem lies in the malicious potential of this technology."

Cybercriminals, long masters of manipulating human emotions, have taken social engineering to a new level with deepfakes and "griefbots": AI bots that impersonate real people, often loved ones who have passed away, to exploit the vulnerable. The emotional investment people develop in an AI companion may be harmless with a legitimate chatbot, but it becomes a dangerous weakness when a malicious actor exploits it.

"It's easy to envision a scenario where cybercriminals use griefbots or AI partners to extort money from a victim or trick them into downloading malicious software," Jamie warns. Real-world examples already exist. Earlier this year, a finance worker at a multinational company was duped into transferring $25 million to cybercriminals posing as the company's CFO using deepfake technology.

This trend is likely to worsen as deepfake technology becomes more accessible to a wider range of criminals. "We expect to see these attacks becoming increasingly common, targeting both individuals and businesses," says Jamie.

To protect yourself, it's crucial to be cautious when interacting with chatbots. Stick to well-established and reputable apps, and avoid downloading chatbots from third-party app stores or suspicious websites. Even official chatbots may not be completely secure.

Chris Hauk, Consumer Privacy Advocate at Pixel Privacy, emphasises the importance of not oversharing with any chatbot. "These apps collect vast amounts of personal data, often sharing it with third-party companies, including those in China and Russia." He also highlights how little transparency many of these apps offer about how they share data and which AI algorithms they use.

The key takeaway is to treat every online chatbot as you would a stranger. Refrain from sharing sensitive information, avoid revealing your identity, and never agree to send money. Remember, if something seems too good to be true, it probably is.

The evolving world of AI offers incredible potential, but with it comes a responsibility to remain vigilant and informed about the emerging threats. By understanding the risks and exercising caution, we can enjoy the benefits of AI while safeguarding ourselves from those who would exploit its power for malicious gain.