Google records your conversations with Gemini for years by default


Don’t type anything into Gemini, Google’s family of GenAI apps, that’s incriminating, or that you wouldn’t want anyone else to see.

This is today’s public service announcement (of sorts) from Google, which in a new support document describes how it collects data from users of its Gemini chatbot apps for web, Android and iOS.

Google notes that human annotators regularly read, tag and process conversations with Gemini – even if the conversations are “disconnected” from Google accounts – to improve the service. (It is unclear whether these annotators are in-house or outsourced, which can be important when it comes to data security; Google doesn’t say.) These conversations are retained for up to three years, along with “associated data” such as the languages and devices used by the user and their location.

Now, Google does give users some control over which Gemini-related data is retained, and how.

Disabling Gemini Apps Activity in Google’s My Activity dashboard (it’s enabled by default) prevents future conversations with Gemini from being saved to a Google account for review (meaning the three-year window won’t apply). Individual prompts and conversations with Gemini, meanwhile, can be deleted from the Gemini Apps Activity screen.

But Google says that even when Gemini Apps Activity is turned off, Gemini conversations will be saved to a Google account for up to 72 hours to “maintain the safety and security of Gemini apps and improve Gemini apps.”

“Please do not enter confidential information in your conversations or any data that you would not want a reviewer to see or for Google to use to improve our products, services, and machine learning technologies,” Google writes.

To be fair, Google’s GenAI data collection and retention policies don’t differ much from those of its competitors. OpenAI, for example, retains all chats with ChatGPT for 30 days, regardless of whether ChatGPT’s chat history feature is turned off, except when a user is subscribed to an enterprise-level plan with a custom data retention policy.

But Google’s policy illustrates the challenges inherent in balancing privacy with developing GenAI models that feed on user data to improve themselves.

Providers’ liberal GenAI data retention policies have recently put them in a sticky situation with regulators.

Last summer, the FTC requested detailed information from OpenAI on how the company verifies the data used to train its models, including consumer data, and how that data is protected when accessed by third parties. Abroad, Italy’s data privacy regulator, the Italian Data Protection Authority, said OpenAI lacked a “legal basis” for the mass collection and storage of personal data to train its GenAI models.

As GenAI tools proliferate, organizations are increasingly wary of privacy risks.

A recent Cisco survey found that 63% of companies have set limits on the data that can be entered into GenAI tools, while 27% have banned GenAI altogether. The same survey found that 45% of employees have entered “problematic” data into GenAI tools, including employee information and non-public files about their employer.

OpenAI, Microsoft, Amazon, Google and others offer GenAI products aimed at businesses that explicitly don’t retain data for any length of time, whether for model training or any other purpose. Consumers, though, as is often the case, get the short end of the stick.

