I’m a security expert – we’re all in danger over three eerie AI threats
ARTIFICIAL intelligence can be exploited by crooks in three dangerous ways, a cyber-expert has revealed.
As AI grows increasingly powerful, security insiders have told The U.S. Sun that gadget users must look out for three key threats.
It’s now extremely easy to access smart AI tech, from chatbots like ChatGPT, Google Bard, and Microsoft Bing to artificial intelligence apps that create photos in seconds.
Tech giants are pouring money into these apps, which now have millions of users around the world.
But with such powerful tools freely available, it’s easy to see how AI could be used by crooks – or how a simple mistake could end up costing you.
Paige Mullen, criminologist and cyber crime advisor at Advanced Cyber Defence Systems, told us of the worrying ways that AI could put you at risk.
AI-generated phishing attacks
The first is AI being used for phishing – scam messages that trick you into handing over personal info or money.
“AI is now being used to create engaging phishing emails,” Paige warned.
“If you ask ChatGPT to create a phishing email, you will receive a reply stating that it cannot respond to unethical requests.
“However, with a few prompts, phishing emails with suggestions of where to input malicious links are drafted, which are personalized and convincing, whilst also reducing the workload for the cybercriminal.”
Deepfakes
Next up is AI fakery, which is created to trick you into believing something that isn’t true.
“This is when AI is used to make videos, photos, and voice recordings which are fake but look authentic,” Paige explained.
“There are many issues with deepfakes as they can be used to create fake accounts and spread misinformation which can lead to societal problems.”
Data privacy
And the third issue relates to the info that you or someone else hands over to AI.
“There have already been many issues surrounding privacy concerns with AI,” Paige revealed.
“For example, Italy went to the extent of banning ChatGPT due to concerns over its lack of authorization to gather personal data.
“An ordinary gadget user can input personal information into ChatGPT which can be stored.
“And what is more concerning, if they input any information about their place of work, this can be exposed.
“For example, sensitive information about Samsung was leaked three times due to employees using ChatGPT.”