After shocking Google Gemini AI images debacle, CEO Sundar Pichai talks tough to staff


Just days after Google Gemini’s inaccuracies in generating AI images emerged, Alphabet CEO Sundar Pichai said the results were “completely unacceptable,” admitting that the company’s AI chatbot had got it wrong. The issue arose when users on X shared screenshots of Gemini inaccurately depicting people of color while refusing to generate images of white people.

Gemini image generation controversy

The problems with Gemini began last week as more cases emerged of the AI chatbot generating inaccurate depictions of people. In another problematic instance, it compared Elon Musk’s influence to that of Adolf Hitler, sparking controversy. According to a Semafor report, Alphabet CEO Sundar Pichai addressed the Google DeepMind team behind the AI chatbot, acknowledging Gemini’s mistakes and saying such issues were “completely unacceptable.”

“I know some of its responses offended our users and showed bias – to be clear, this is completely unacceptable and we were wrong,” Pichai said. He also confirmed that the team behind it was working around the clock to resolve the issues, saying they had already seen “substantial improvement across a wide range of prompts.”

Check out Sundar Pichai’s tough words to staff below:

“I want to address recent issues with problematic text and image responses in the Gemini (formerly Bard) app. I know some of its responses have offended our users and demonstrated bias – to be clear, this is completely unacceptable and we were wrong.

Our teams have been working around the clock to resolve these issues. We’re already seeing substantial improvement across a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we will review what happened and make sure we fix it at scale.

Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We have always sought to give users helpful, accurate and unbiased information in our products. That’s why people trust them. This must be our approach for all of our products, including our emerging AI products.

We will pursue a clear set of actions, including structural changes, updated product guidelines, improved release processes, robust assessments and red teaming, and technical recommendations. We are reviewing all of this and will make any necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the past few weeks. This includes some fundamental advances in our underlying models – for example, our 1 million long-context window breakthrough and our open models, both of which have been well received.

We know what it takes to create great products that are used and loved by billions of people and businesses, and with our infrastructure and research expertise, we have an incredible springboard for the AI wave. Let’s focus on what matters most: creating useful products that earn our users’ trust.”

