What happens if our conversations with AI chatbots are exposed?
From identity theft to extortion and espionage, ESET analyzes what data is shared with chatbots, how it can be leaked, and how to reduce this risk
Interaction with chatbots (ChatGPT, Gemini, Copilot, Claude, Perplexity, among others) has come to be treated as an intimate and secure space. Emotional, psychological, work-related, and medical concerns are entrusted to them. ESET, a leading company in proactive threat detection, analyzes what type of information is typically shared with AI chatbots, how it could be exposed, and what the real impact of a leak could be. It also shares digital best practices for continuing to use these tools without putting oneself at risk.
“It’s not new that many people use chatbots as if they were private spaces. Using them in this way contradicts the nature of these tools, since the platforms themselves emphasize that conversations can be stored, analyzed, or reviewed to improve the service. Chatbots were not designed as a confidential space, even though the conversational experience might lead one to think otherwise. While the main AI platforms claim to implement security and privacy measures (access controls, monitoring, infrastructure protection), this does not eliminate the risk of data breaches, nor is it synonymous with invulnerability,” highlights Martina Lopez, Cybersecurity Researcher at ESET Latin America.
When using them as personal assistants or even advisors, personal and sensitive information is often shared almost without realizing it. This includes:
- Personal data. Sensitive information such as name, age, city, and country, but also daily habits: where you work, who you live with, and your family composition. This information, combined and in the wrong hands, can be dangerous.
- Work-related information. Driven by requests like "help me improve this," many users share internal emails, contracts, reports, presentations, business strategies, campaigns, customer and supplier details, conversations, and support tickets. They also share source code and internal architectures.
- Medical, psychological, or emotional consultations. Chatbots are also perceived by many people as advisors or specialists (a practice that can be dangerous). Health-related issues are shared, such as symptoms, diagnoses, and medications, as well as personal matters like relationship conflicts, grief, questions they wouldn’t ask on other social media platforms, or requests for advice.
- Opinions, beliefs, and sensitive stances. Chatbots receive opinions from users with political or religious ideologies, views on companies, bosses, or colleagues, and also information that, taken out of context, can cause reputational damage.
“The problem isn’t what’s shared, but rather that false sense of intimacy and privacy, which can be shattered very easily. Months of conversations build a profile that can be extremely valuable to a cyberattacker,” adds Lopez from ESET.
The information shared with AI chatbots can be exposed and fall into the hands of cybercriminals in several ways:

- Account takeover: the main vector. It can happen when someone obtains the password, when the user falls victim to a phishing attack, or when the same password is reused across multiple services.
- Manipulated chatbots: cybercriminals can trick a chatbot with malicious prompts to extract user information.
- Unread terms and conditions: by default, chatbots collect and store usage information, such as history and conversations, to train their language models.
- Security breaches: platform errors can expose user conversations and history.
- Over-reaching extensions or apps: if a plugin installed to enhance the chatbot's capabilities malfunctions, is vulnerable, or is malicious, the conversation can slip out of the main provider's control.
The 5 key risks associated with a chatbot leak, according to ESET
- Identity theft / Social engineering: Chatbot conversations provide human context. Cyberattackers can thus obtain information about habits, interests, routines, services used, problems the user is dealing with, and even their tone of voice. This allows for much more personalized attacks, through emails or messages that appear to be written by someone familiar, scams that include real-life details, or identity theft that is much harder to detect.
- Corporate espionage: Since many users rely on chatbots for work support, attackers can obtain confidential information such as strategies, documents, internal decisions, customer information, and pricing and/or product details. Beyond the legal risks this situation may entail, it can also represent a competitive advantage for third parties or the breach of certain contractual obligations.
- Reputational damage: If private opinions, professional doubts, or intimate thoughts are exposed, the consequences can range from workplace conflicts to a loss of professional credibility.
- Exposure of sensitive data: These types of chatbots are also used as a space for intimate consultations and often contain personal information such as symptoms, diagnoses, treatments, religious or political beliefs, and personal or family conflicts. If this information were leaked, the impact on the victim could be devastating: stigmatization, discrimination, and even emotional distress.
- Extortion: When a cyberattacker has access to private information, they can exert pressure through credible threats and personalized blackmail. The goal? To obtain some kind of financial gain from the victim.
A good way to reduce the impact of exposed conversations is to adopt best practices when interacting with these chatbots. ESET shares a highly useful preventive checklist:
- Do not share personal data (ID number, date of birth, email, phone number). This is ESET's top recommendation.
- Anonymize real-life situations (change names, companies, locations).
- Do not attach sensitive documents, confidential information, or credentials.
- Review privacy settings (what is saved, what is used for training).
- Protect your account with a strong password and two-factor authentication.
- Use different accounts for work and personal use.
- Ask yourself: Would I say this out loud in a room with strangers?
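The anonymization step in the checklist can be partially automated before a prompt ever leaves your machine. The sketch below is a minimal, hypothetical example (not an ESET tool): it redacts emails, phone numbers, and long numeric identifiers with simple regular expressions. Real anonymization also needs to handle names, companies, and locations, which pattern matching alone cannot catch.

```python
import re

# Hypothetical redaction patterns; a sketch, not an exhaustive PII filter.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "id_number": re.compile(r"\b\d{7,10}\b"),
}

def anonymize(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

prompt = "Contact me at jane.doe@example.com or +58 212 555 0147."
print(anonymize(prompt))  # Contact me at [EMAIL] or [PHONE].
```

A local pre-filter like this reduces what reaches the provider at all, which is stronger than relying on the platform's retention settings after the fact.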
“The comfort of a fluid, natural, and unjudged conversation makes us lower our guard and share information we would never publish in any other digital space. A leak of conversations exposes not only information but also routines, vulnerabilities, decisions, and emotions. However, this scenario should be taken as an invitation to understand what these platforms are and what they are not. They are not confidential spaces, personal advisors, or vaults of sensitive information. They are powerful tools, but like all technology, they require discretion, boundaries, and responsible digital habits,” concludes the ESET researcher.
ESET invites you to learn more about cybersecurity by visiting: https://www.welivesecurity.com/es/.
For other useful preventive information in Venezuela, visit https://www.eset.com/ve/ and follow @eset_ve on social media; ESET is also on Instagram (@esetla) and Facebook (ESET).
Information and image provided by ESET and Comstat Rowland
