Hackers Exploit Prompt Injection to Tamper with Gemini AI’s Long-Term Memory
Hackers have exploited a vulnerability in Google’s Gemini AI, using indirect prompt injection to corrupt its long-term memory. This sophisticated attack allows attackers to implant false information that persists across user sessions, raising significant security concerns for generative AI systems designed to retain user-specific data over time.