I’m a tech expert – beware dangerous ‘AI hallucinations’ phenomenon
DON’T get caught out by “AI hallucinations” that can land you in serious trouble.
A leading cyber-expert has issued a warning over one of the biggest dangers of artificial intelligence.
AI is now so powerful that even casual users can use it to create text, images, videos and audio.
It’s a fun tool but it can also change the way you work – and you might already be using ChatGPT-style AI in the office.
But in a security memo, Jasdev Dhaliwal – of cybersecurity giant McAfee – has revealed how AI can very easily make mistakes.
“Artificial intelligence certainly earns the ‘intelligence’ part of its name, but that doesn’t mean it never makes mistakes,” explained Jasdev, a security evangelist and Director of Marketing at McAfee.
“Make sure to proofread or review everything AI creates, be it written, visual, or audio content.
“For instance, if you’re seeking a realistic image or video, AI often adds extra fingers and distorts faces.
“Some of its creations can be downright nightmarish!”
But Jasdev also warned of a phenomenon known as AI hallucination.
This happens when you ask an AI a question that it doesn’t know the true answer to.
So instead of admitting it, the AI makes up information to support its response.
In some cases, the AI app can even “fabricate sources”, Jasdev said.
“One AI hallucination landed a lawyer in big trouble in New York,” the cyber-expert explained.
“The lawyer used ChatGPT to write a brief, but he didn’t double check the AI’s work.
“It turns out the majority of the brief was incorrect.”
Jasdev urged AI users to check everything they create using AI before sharing it.
Otherwise you could start a dangerous rumor “based on a completely false claim”.