Ever since its launch in November 2022, ChatGPT has gained a lot of traction for the way it is revolutionizing the internet. At work or at home, for writing articles, emails, social media posts, summaries of long texts and even computer code, it has become the ultimate productivity hack. However, ChatGPT's attractive capabilities seem to have created a blind spot around the potential dangers of data breaches. While ChatGPT does not store user data long term by default, there are real risks linked to its use.
Conversations remain stored and accessible to third parties
OpenAI, the company behind ChatGPT, openly promotes the way its artificial intelligence models can be fine-tuned over time, which it describes as one of their most useful and promising features. However, this ongoing improvement is made possible, among other things, by exposure to real-world problems and data. In other words, data and conversations entered into ChatGPT may be read, stored and used for training purposes in order to provide better service.
On the ChatGPT FAQ page, OpenAI points out that users' conversations are "stored on OpenAI systems and our trusted service providers' systems in the United States and the rest of the world." It also notes that some portions of exchanges may be shared with third parties for the purposes of data annotation, security and improving support. While bound by confidentiality obligations, certain authorized personnel as well as specialized subcontractors can also access and review a user's content. Knowing that several intermediaries have access to users' conversation histories, do we need to remind you how important it is not to share confidential information?
ChatGPT is not immune to hackers
Exposing data to any application or third-party service carries the risk of being hacked, and ChatGPT is no exception. Even though it was developed with security measures in place, there is always the possibility of unforeseen vulnerabilities. Cybercriminals can exploit such flaws to access personal data or disrupt a system's operation. Indeed, in March 2023, a bug was discovered that had exposed conversation titles, the first message of new conversations, and payment information belonging to some ChatGPT Plus users.
In any event, to mitigate these data breach risks, we recommend that you avoid providing private information when using ChatGPT or any other online service. We also strongly recommend reviewing the privacy and security policy of the platform or application you are using, so that you properly understand how your conversations are processed and protected. If in doubt, do not hesitate to contact one of our experts at MicroAge, who will be happy to assist you.