Google's AI chatbot Gemini has recently launched an important update—the "Memory" feature, which allows the AI assistant to remember users' life information, work details, and personal preferences, thereby providing a more personalized service experience.
This new feature is currently only available to users subscribed to the $20 per month Google One AI Premium plan, and it is temporarily limited to the web version, with no availability on iOS and Android apps yet. It is worth noting that only English input is supported at this time.
Specifically, Gemini's memory feature can assist users in various practical scenarios. For example, when a user tells Gemini their favorite food, the next time they ask for restaurant recommendations, the AI can provide more targeted suggestions based on their taste preferences. Google has also showcased other practical examples in the interface, such as "use simple language, avoid jargon," "I only know JavaScript programming," and "include daily expenses when planning travel."
Google emphasizes that users can turn off the memory feature at any time, but stored memory content will need to be manually deleted to disappear. More importantly, a Google spokesperson clearly stated that this memory information will not be used for model training and will not be shared with others.
However, the security of such memory features is a concern. Earlier this year, a security researcher discovered that hackers could implant "false" memories in ChatGPT, thereby continuously stealing user data. This finding reminds us that the memory functions of AI systems require stricter security measures.
The launch of this feature reflects the trend of AI assistants moving towards more personalized and intelligent capabilities, but it also raises concerns about user privacy protection and data security. Balancing convenience while ensuring the safety of user data will be an important issue that AI service providers need to continuously address.