Google reportedly fears Bard could leak confidential info, tells employees to be wary
Google is reportedly worried about privacy and security and is advising employees to be extra careful when using Google Bard
Google has reportedly identified privacy and security issues that could arise when its employees use Google Bard. For one, the company has reportedly told its developers not to use code generated by chatbots (Bard's code-generating feature was showcased at the Google I/O conference). The main concern appears to be company secrets. If you've been following the mobile tech world (or the tech world in general), you've seen how many substantial leaks have spoiled big product reveals over the last few years.
Basically, if employees enter confidential info into Bard or ChatGPT, that info can become public. The same applies to code: sharing it could compromise its security by exposing it to potential hackers who could exploit it.
Other companies, such as Samsung and Amazon, reportedly have similar guardrails in place when it comes to AI.
In a comment to Reuters, Google said that it strives to be transparent about Bard's limitations, adding that when it comes to code, Bard can be a helpful tool, although it may sometimes make undesired suggestions.
Meanwhile, Google is reportedly in talks with Ireland's Data Protection Commission after delaying Bard's launch in the EU, again over the regulator's privacy concerns.