
Ensuring Legal Confidentiality for AI User Conversations in Moldova

Original Article by Anton Goncear

Executive Summary

Artificial intelligence is transforming societies at a rapid pace, and Moldova is no exception. One of the most pressing challenges today is the question of privacy and confidentiality in human–AI interactions. In recent months, OpenAI’s Chief Executive Officer, Sam Altman, warned that conversations with systems such as ChatGPT are not legally protected and can be requested as evidence in legal proceedings. This means that individuals who turn to AI for emotional guidance, legal questions, or health advice may expose themselves to risks that they would never face in professional human-to-human settings.

The absence of clear protections for AI conversations undermines user trust and exposes vulnerable groups to harm. This policy proposal advocates for Moldova to introduce a new category of confidentiality, referred to here as “AI-conversation privilege.” This privilege would apply to sensitive AI use cases, supported by mandatory encryption and strict oversight. By establishing this legal safeguard, Moldova can build public trust, align with emerging global standards, and ensure that the benefits of AI are not overshadowed by risks of surveillance or misuse.


Context and Problem Statement

In Moldova, as in many countries, artificial intelligence technologies are increasingly integrated into everyday life. Students use chatbots to receive tutoring help, young people explore emotional guidance through digital assistants, and professionals test early ideas with generative models. In sensitive cases, these interactions may involve deeply personal disclosures. Unlike a conversation with a doctor, lawyer, or therapist, however, an AI conversation is not protected by legal privilege. Sam Altman has recently made this issue more visible by stating that conversations with ChatGPT do not enjoy confidentiality and could be subpoenaed in legal disputes. This creates a mismatch between user expectations and legal reality. In a society where digital literacy is uneven, individuals may not even realize that what they type to a chatbot could later be used against them in a court of law.

Moldova’s current legal framework for personal data protection is largely based on European Union standards. The laws cover general issues of personal information handling but do not address whether AI conversations should be treated as private. As a result, there is no legal shield preventing courts or third parties from requesting access to these records. This legal vacuum represents a growing governance challenge as AI technologies become essential to education, healthcare, business, and governance.


Assessment of Existing Frameworks

Existing legislation in Moldova provides a foundation for personal data protection but lacks explicit recognition of AI-specific confidentiality. The General Data Protection Regulation (GDPR), which informs Moldovan policy, sets strict rules on data storage and consent, but it does not grant conversational privilege similar to what exists in professional domains such as medicine or law. This gap means that even if AI companies follow data minimization principles, users cannot be certain that their words will remain private in the face of legal challenges. Other jurisdictions are beginning to confront similar issues. In the United States, courts have already seen cases where chat logs were submitted as evidence. In Europe, discussions about digital rights have emphasized transparency and consent, but there is no unified stance on conversational privilege. Moldova, therefore, has an opportunity to position itself as a forward-looking state by filling this gap with a clear and protective framework.


Policy Objectives

This proposal sets out four main objectives:


  1. Establish a legal privilege for sensitive AI conversations, ensuring confidentiality in contexts where personal or professional guidance is sought.

  2. Mandate technical protections, including encryption, time-limited retention, and transparency requirements for AI providers operating in Moldova.

  3. Define the scope and exceptions of privilege, making sure that disclosure is permitted only under strict judicial oversight and in narrowly defined circumstances.

  4. Promote transparency and awareness so that users understand both the protections they enjoy and the limits of those protections.



Proposed Recommendations

First, Moldova should amend its existing data protection and electronic communication laws to introduce the concept of AI-conversation privilege. This would grant conversations with AI systems a level of confidentiality similar to that afforded to conversations with licensed professionals in sensitive domains. The privilege should apply to situations where users seek guidance in matters of health, law, education, or emotional well-being.


Second, the law should require providers of AI services in Moldova to adopt technical safeguards. These must include encryption of sensitive conversations, clear separation of temporary and permanent storage, and automatic deletion after a defined retention period. Transparency reports should be mandatory so that the public can see how often providers are asked to share user data and how they respond.
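To make the retention requirement concrete, the logic of time-limited storage with automatic deletion can be sketched in a few lines. This is a minimal illustration, not an implementation of any provider's actual system: the record fields, the 30-day retention window, and the assumption that conversation bodies are already encrypted at rest are all hypothetical choices made for the example; the real window would be set by regulation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative retention window; the actual period would be fixed by law or regulation.
RETENTION = timedelta(days=30)

@dataclass
class ConversationRecord:
    user_id: str
    ciphertext: bytes   # conversation body, assumed already encrypted at rest
    sensitive: bool     # flagged as privileged (health, legal, emotional well-being)
    created_at: datetime

class ConversationStore:
    """Toy store demonstrating time-limited retention with automatic deletion."""

    def __init__(self) -> None:
        self._records: list[ConversationRecord] = []

    def save(self, record: ConversationRecord) -> None:
        self._records.append(record)

    def purge_expired(self, now: datetime) -> int:
        """Delete records older than the retention window; return how many were removed."""
        before = len(self._records)
        self._records = [
            r for r in self._records if now - r.created_at <= RETENTION
        ]
        return before - len(self._records)

    def count(self) -> int:
        return len(self._records)
```

Run periodically (for example, from a scheduled job), `purge_expired` enforces the "automatic deletion after a defined retention period" requirement; the count it returns is the kind of figure a mandatory transparency report could aggregate and publish.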


Third, user awareness must be prioritized. AI systems should provide clear, accessible notifications that explain whether a given chat is protected by privilege. This measure would reduce the risk of misunderstanding and ensure that individuals can make informed decisions.


Fourth, Moldova’s data protection authority should be tasked with overseeing compliance. The authority should certify providers that meet privilege and encryption standards. Non-compliance should be subject to significant penalties. At the same time, judicial exceptions must remain possible for cases involving national security or serious criminal investigations, but these should be tightly limited and subject to independent review.


Potential Benefits

Adopting AI-conversation privilege would strengthen user trust, making Moldovan citizens more confident in using AI responsibly. It would align Moldova with emerging debates in Europe and North America, positioning the country as a leader in privacy-sensitive innovation. Vulnerable communities, such as students, young adults, and individuals without easy access to professional services, would benefit most by being able to use AI without fear of exposure. At the same time, by balancing privilege with judicial oversight, Moldova would protect both individual rights and the rule of law.


Considerations and Limitations

This proposal requires careful balancing. Not all AI interactions should be privileged. Casual use of AI for entertainment or basic queries should not fall under the same protections as sensitive disclosures. Technical feasibility also poses challenges. True end-to-end encryption is complex in AI systems where the provider itself is the processing endpoint. Therefore, interim solutions such as temporary storage and restricted access may be more practical. Another limitation is the risk of misuse. Privilege must not shield criminal activity. The framework must clearly define exceptions, ensuring that the privilege cannot be exploited. Small developers may also find compliance burdensome, so Moldova should consider support mechanisms, such as technical assistance or phased implementation, to avoid stifling innovation.


Next Steps

The first step should be a national consultation process involving government agencies, AI providers, legal experts, civil society, and the general public. This consultation would refine the definitions of AI-conversation privilege and identify practical enforcement mechanisms. Following consultation, lawmakers should draft amendments to existing laws and issue regulatory guidance on encryption standards, retention limits, and reporting duties. A pilot program could then be launched with selected AI services, such as healthcare chatbots or educational platforms, to test the implementation of privilege in practice. Finally, Moldova should invest in public education campaigns, ensuring that citizens understand how to use AI safely and responsibly. Regular monitoring and evaluation should be built into the framework so that adjustments can be made as technologies evolve.


Conclusion

The debate about AI governance is no longer theoretical. Every day, people disclose sensitive information to conversational AI systems, often without realizing that these interactions lack legal protection. Sam Altman’s warning about the vulnerability of ChatGPT users highlights the urgency of the problem. Moldova now has a unique opportunity to act quickly and decisively by creating a framework for AI-conversation privilege. By combining legal recognition, technical safeguards, regulatory oversight, and user education, Moldova can ensure that artificial intelligence strengthens rather than undermines public trust. Such a policy would not only protect Moldovan citizens but also establish Moldova as a regional leader in forward-thinking AI governance. In an era when technology evolves faster than regulation, the courage to act early and thoughtfully will define which societies thrive.
