Are you confident that your deleted conversations with AI chatbots are really gone? A landmark lawsuit brought by The New York Times (NYT) against OpenAI reveals a troubling reality: “deleted” data may still be stored, analyzed, or exposed in ways you never intended or consented to.
Your Deleted Data Isn’t Always Deleted
According to The Verge, OpenAI has retained conversations that users deleted, despite their expectation of privacy. The practice is being challenged in court and has sparked debate about what “delete” really means for AI chatbots and cloud-based services.
Clarification and Critique: OpenAI asserts that it retains some deleted data because of legal obligations, such as preserving evidence for ongoing litigation. This is not a blanket policy for all deleted data, and OpenAI says it remains committed to user privacy. Nevertheless, the episode highlights the ambiguity and potential risk surrounding deletion in AI systems.
For beginners: When you delete a conversation with tools like ChatGPT, you might expect it to be gone forever. However, technical and legal reasons can sometimes delay or prevent permanent deletion, especially when data is “preserved” for ongoing investigations.
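One common reason is the “soft delete” pattern used by many cloud services: pressing Delete flags a record as hidden rather than erasing it, and a separate purge job (plus backup rotation) removes it later, if at all. The Python sketch below is a hypothetical illustration of that pattern, not OpenAI’s actual implementation; every name in it is invented.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical sketch of "soft delete": the record is flagged and
# hidden from the user, but the data remains in the primary store
# until a later purge job (and may persist longer in backups or
# under a legal hold).

@dataclass
class Conversation:
    conversation_id: str
    content: str
    deleted_at: Optional[datetime] = None  # None means visible to the user

def soft_delete(convo: Conversation) -> None:
    """What a 'Delete' button often does: hide, not erase."""
    convo.deleted_at = datetime.now(timezone.utc)

def hard_delete(store: dict, conversation_id: str) -> None:
    """Actual erasure from the primary store; backups live elsewhere."""
    store.pop(conversation_id, None)

store = {"c1": Conversation("c1", "draft contract with client details")}
soft_delete(store["c1"])
# The user no longer sees the chat, yet the data is still here:
assert store["c1"].content == "draft contract with client details"
```

The gap between these two functions, plus whatever lives in backups, is exactly where user expectations and provider practice can diverge.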
Why This Matters to You
If your organization uses non-EU cloud AI services, there’s a risk that sensitive data may be stored in ways that do not align with strict European data protection rules.
Critical questions to ask:
- Does your organization confirm that AI providers truly delete data upon request?
- Are the services you use certified for information security and privacy?
- Has anyone audited what happens to your data after you believe it’s been deleted?
Nuanced Context: While concerns are valid, most major cloud providers have detailed deletion and privacy policies designed to meet various international regulations. Rather than assuming all companies are non-compliant, organizations should conduct provider-specific audits and risk assessments.
Moving Forward Responsibly
The lesson is clear: in the world of AI, “deletion” is complex and “privacy” is never automatic. Organizations must go beyond mere compliance by implementing robust data governance frameworks and regularly reviewing their AI providers’ privacy and security measures.
Before deploying new AI tools:
- Evaluate whether providers are certified and transparent about data handling.
- Understand your own liability and accountability if “deleted” doesn’t mean “gone.”
Industry Efforts: Many organizations are already improving their practices. Industry standards in areas like algorithmic transparency, ad tracking (the subject of recent EU rulings), and AI education are evolving quickly.
What Can I Do?
For Individuals
Understand the Platform’s Data Policy
- Before using AI chatbots (like ChatGPT), review their privacy policy to see how your data is stored, deleted, and used.
Be Cautious With Sensitive Information
- Where possible, avoid sharing sensitive Personally Identifiable Information (PII) in AI chats; a minimal scrubbing pass, sketched below, can catch the most obvious patterns before a prompt leaves your machine.
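The following sketch is deliberately minimal and not production-grade redaction: the two regular expressions only catch email addresses and simple phone numbers, and real-world redaction needs dedicated tooling (named-entity recognition, format-specific rules).

```python
import re

# Naive PII scrubbing before a prompt is sent to any chatbot.
# These patterns are illustrative only and will miss many PII forms.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or +31 6 1234 5678."
print(redact(prompt))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```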
Exercise Your Rights
- If you are in the EU (or otherwise protected by the GDPR), invoke your right to erasure (the “right to be forgotten”, Article 17) and request permanent data deletion when necessary.
Stay Informed
- Keep up to date with developments on AI governance and data privacy laws.
For Organizations
Vet AI Providers
- Choose AI and cloud vendors with recognized security certifications (e.g., ISO/IEC 27001, 27018). Ask for documentation and third-party audits.
Clarify Data Deletion Processes
- Request detailed explanations of how deletion requests are processed, who oversees them, and whether there are exceptions (such as legal holds); the sketch below shows why such exceptions matter.
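As a purely hypothetical illustration of a legal-hold exception, here is a purge job that skips records under hold. The record layout and the 30-day grace period are invented assumptions, not any vendor’s documented process.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical purge job: erase records whose grace period has
# expired, unless a legal hold keeps them in place.
GRACE_PERIOD = timedelta(days=30)

records = [
    {"id": "a", "deleted_at": datetime(2025, 1, 1, tzinfo=timezone.utc), "legal_hold": False},
    {"id": "b", "deleted_at": datetime(2025, 1, 1, tzinfo=timezone.utc), "legal_hold": True},
]

def purge(records, now):
    kept = []
    for rec in records:
        expired = rec["deleted_at"] + GRACE_PERIOD <= now
        if expired and not rec["legal_hold"]:
            continue  # eligible for permanent erasure
        kept.append(rec)  # retained: within grace period or under hold
    return kept

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print([r["id"] for r in purge(records, now)])  # -> ['b']: the hold wins
```

Until the hold on record “b” is lifted, no deletion request can remove it. This is exactly the kind of exception a provider should be able to explain in writing.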
Regular Audits and Training
- Conduct privacy and security audits regularly. Educate your staff about AI best practices and algorithmic transparency.
Update Your Data Governance Framework
- Ensure your policies go beyond compliance checkboxes and address the nuances of modern AI systems, including temporary and backup data storage (one illustrative approach is sketched below).
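One way to make those nuances explicit is a machine-readable retention policy that names every copy of the data, not just the primary store. The sketch below is illustrative only; the data categories and retention periods are invented for the example, not a recommendation.

```python
# Illustrative retention policy that accounts for backups and
# temporary copies, not just the primary store.
RETENTION_POLICY = {
    "chat_transcripts": {
        "primary_store_days": 0,       # erase on user request
        "backup_days": 35,             # backups roll off on this cycle
        "cache_or_temp_days": 1,
        "legal_hold_overrides": True,  # a hold suspends all of the above
    },
    "audit_logs": {
        "primary_store_days": 365,
        "backup_days": 365,
        "cache_or_temp_days": 7,
        "legal_hold_overrides": True,
    },
}

def max_possible_retention(category: str) -> int:
    """Worst-case days a 'deleted' item may still exist somewhere."""
    policy = RETENTION_POLICY[category]
    return max(policy["primary_store_days"],
               policy["backup_days"],
               policy["cache_or_temp_days"])

print(max_possible_retention("chat_transcripts"))  # -> 35
```

A policy written this way makes the honest answer to “when is it really gone?” explicit, auditable, and easy to compare against a provider’s claims.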
Respond to Emerging Regulations
- Stay ahead of new legal requirements (such as GDPR updates or recent EU court rulings on tracking), and adjust your data practices accordingly.