Cuteness Hides The Sinister Truth About Google’s AI Image Tool
Two years before Imagen was released, Google fired its AI ethics researcher, Timnit Gebru, over a dispute about a research paper discussing the potential problems of large language models, as reported by Technology Review. According to that paper, one of the risks of large language models is human bias: because they are trained on human-generated data scraped from the internet, they can reproduce human prejudices such as sexism and racism. How does this relate to Google's AI image tool? Imagen relies on a pre-trained language model to turn text prompts into realistic images.
In fact, Google acknowledges the potential risks of Imagen on its project page, under "Limitations and Societal Impact": "While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models." For these reasons, Google says it will not release its AI image tool for public use at this time.
Google also says it avoids depicting people in the images it shares, because the AI image tool tends to reinforce gender stereotypes and shows a "bias towards generating images of people with lighter skin tones." That is why the Imagen creations Google has released so far feature mostly cute animals. Similarly, OpenAI's DALL-E 2 is restricted to a small group of users because of the same potential risks (via Vox). Despite these limitations, Google says it is working to improve its AI image tool in the future.