Imagine intelligent software that can impersonate anybody and use stolen identities to defraud companies of millions of dollars; an AI bot that can trawl through billions of internet pages, social media profiles and dark web databases, looking for data it can use to deceive its victims.
Once it finds a suitable victim, the bot sends them a forged email pretending to be from someone they trust, such as a family member or co-worker. The objective: to trick them into revealing sensitive information it can use to steal from them.
Harvesting the email account passwords, credit card details and bank account logins of thousands of victims, this AI-powered virtual criminal can steal millions of dollars in mere hours.
The future of crime?
A robot smart enough to outwit humans and steal our identities is a deeply disturbing concept. It sounds like dystopian science fiction, doesn’t it? But criminal syndicates and hackers are already starting to use AI as a weapon in cybercrime attacks, defrauding companies of millions of dollars, and the severity and sophistication of those attacks are expected to increase sharply in the next few years.
“As AI capabilities become more powerful and widespread, we expect the growing use of AI systems to lead to the expansion of existing threats and the introduction of new threats,” states a report published earlier this year by researchers at Oxford and Cambridge universities.
Titled “The Malicious Use of Artificial Intelligence,” the report examines current trends in AI technology and makes sobering predictions about how malicious actors could exploit it to scale up their fraud operations. At the moment, the most damaging cybercrime is carefully orchestrated by human operatives who manipulate communication channels like email to deliver malware and phishing attacks; AI has the potential to automate that process.
“The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labour, intelligence and expertise,” the report speculates.
The report also looks ahead to a future with highly advanced synthetic human avatars. “The ability to generate synthetic images, text and audio could be used to impersonate others online, or to sway public opinion by distributing AI-generated content through social media channels… Many tasks involve communicating with other people, observing or being observed by them, making decisions that respond to their behaviour, or being physically present with them. By allowing such tasks to be automated, AI systems can allow the actors who would otherwise be performing the tasks to retain their anonymity… There has recently been significant progress in developing speech synthesis systems that learn to imitate individuals’ voices. Such systems would open up new methods of spreading disinformation and impersonating others…”
Weapon or tool?
So what do these predictions mean for cybersecurity? Are we entering a near future where AI avatars impersonating the voices of people we trust can con us into doing their bidding? Or will the accelerating development of AI actually give us better tools to secure our networked society?
Both the research community and the cybersecurity industry are placing a strong emphasis on implementing AI. Security experts recognise that AI-powered defensive tools will be needed to counter the inevitable use of AI as a weapon in crime.
How the AI revolution plays out in the realm of cybersecurity will hinge on how quickly companies and government agencies adopt AI-powered security, denying online criminals the chance to gain the upper hand.
… … …
Discover Uncloak
Experience a demo of Uncloak’s AI-powered cybersecurity platform right now: go to https://demo.uncloak.io/
Please subscribe to our social channels to keep up with the latest Uncloak news.