Is ChatGPT safe? Here are the risks to consider before using it

A laptop screen shows the home page for ChatGPT, OpenAI's artificial intelligence chatbot.
Rolf van Root / Unsplash

For those who have seen ChatGPT in action, you know just how amazing this generative AI tool can be. And if you haven’t seen ChatGPT do its thing, prepare to have your mind blown! 

There’s no doubting the power and performance of OpenAI’s famous chatbot, but is ChatGPT actually safe to use? While tech leaders the world over are concerned about the rapid evolution of AI, those global concerns don’t necessarily apply to an individual user’s experience. With that in mind, let’s take a closer look at ChatGPT to help you home in on your comfort level.

Privacy and financial leaks

In at least one instance, chat history between users was mixed up. On March 20, 2023, ChatGPT creator OpenAI discovered a problem, and ChatGPT was down for several hours. Around that time, a few ChatGPT users saw the conversation history of other people instead of their own. Possibly more concerning was the news that payment-related information from ChatGPT-Plus subscribers might have leaked as well.

OpenAI published a report on the incident and corrected the bug that caused the problem. That doesn’t mean new issues won’t arise in the future. With any online service, there is a risk of accidental leaks like this, as well as cybersecurity breaches from the growing army of hackers.

OpenAI’s privacy policy

According to OpenAI’s privacy policy, your contact details, transaction history, network activity, content, location, and login credentials might be shared with affiliates, vendors and service providers, law enforcement, and parties involved in transactions.

In some cases, this is unavoidable. OpenAI might use third-party payment processors, so this is to be expected. The company must comply with any legal obligations, and some of this data might be used for research.

Even when it’s easy to justify collecting data, the potential for misuse and leaks is a valid safety concern. OpenAI’s ChatGPT FAQ suggests you don’t share sensitive information and warns that prompts can’t be deleted.

ChatGPT as a hacking tool

A laptop opened to the ChatGPT website.
Shutterstock

On the subject of cybersecurity, some experts are concerned about ChatGPT’s potential use as a hacking tool. It’s clear that the advanced chatbot can help anyone write a very official-sounding document, and ChatGPT could be called upon to construct a convincing email phishing scam.

The AI is also a good teacher, making it easy to learn new skills with ChatGPT, possibly even dangerous programming skills and information about network infrastructure. The combination of ChatGPT and dark web forums could lead to numerous and novel attacks to challenge the already stretched resources of cybersecurity researchers.

For example, someone on X (formerly Twitter) posted an example of asking GPT-4 to write instructions for how to hack a computer, and it provided some terrifying detail.

Well, that was fast…

I just helped create the first jailbreak for ChatGPT-4 that gets around the content filters every time

credit to @vaibhavk97 for the idea, I just generalized it to make it work on ChatGPT

here's GPT-4 writing instructions on how to hack someone's computer pic.twitter.com/EC2ce4HRBH

— Alex (@alexalbert__) March 16, 2023

ChatGPT can write code based on plain English requests, allowing anyone to generate a program. And with the ChatGPT plug-ins feature, the AI can even run self-generated code.

OpenAI sandboxed this capability to prevent dangerous uses, but we’ve already seen an example of OpenAI’s GPT-3 API being hacked. OpenAI must be very careful with security as the plug-ins feature and internet access are rolled out to more people.

ChatGPT and job safety

ChatGPT has been worrying teachers because it makes plagiarism incredibly easy. OpenAI trained its chatbot on exactly the kinds of information students are asked to write essays about to prove they’ve learned a subject.

While that’s not a safety concern, teachers also need to be aware that ChatGPT can educate students on a broad range of topics, providing one-on-one attention and instant answers to questions. In the future, AI might be called upon to help teach students in overcrowded classrooms or to assist with tutoring.

For authors, ChatGPT could seem threatening. In a matter of seconds, it can generate thousands of words. The same task requires hours of work for a person, even a professional writer.

An OpenAI graphic for ChatGPT-4.
OpenAI

At the moment, there are still enough errors to make it more useful as a research or writing tool than a replacement for authors. If accuracy issues are resolved, AI could begin taking jobs.

ChatGPT has a vast number of uses, and more are being discovered every day. Beyond communication and learning, ChatGPT can even analyze a photo of a hand-drawn app and write a program to create it, as shown in OpenAI’s demonstration of the new capabilities of GPT-4.

ChatGPT scams

It isn’t OpenAI’s fault, but a side effect of any exciting new technology is a surge in scams that promise greater access or new features. Since access to ChatGPT is still limited and sometimes slow, there’s a strong demand for more ChatGPT goodness.

Each new update brings expanded capabilities, some of which require a membership and have limited availability. ChatGPT fervor provides fertile ground for scams. Offers of free, unlimited access at the fastest speed and with the best new features are hard to pass up.

Unfortunately, the old saying still holds — if it sounds too good to be true, it probably is. Be wary of ChatGPT offers that come via email or social media. It’s best to check trusted media outlets for news or go directly to OpenAI to confirm any invitations or deals that sound iffy.

ChatGPT is both powerful and terrifying. As one of the first examples of a publicly available AI with good language skills, its challenges and successes should serve as a wake-up call for everyone. It’s important to use caution with new AI technology. It’s too easy to get caught up in the excitement and forget that you’re dealing with an online service that can be hacked or misused.

Does ChatGPT collect user data?

It most certainly does. ChatGPT has access to many different types of user info, including IP and email addresses, location, the devices you’re using, whether you’re on a public or private Wi-Fi network, and even your chat history.

Various ChatGPT custom GPTs.
OpenAI

Can anyone access my ChatGPT history?

According to OpenAI, anyone with access to a shared link will also have access to the conversation within. This includes an entire snapshot of the dialogue up to the point it was shared. Users will also see any responses added to the conversation.

While there is currently no way to set an expiration date for a shared link, users can delete shared links to invalidate them. And if the creator of the shared link deletes the conversation behind it, the shared link will also be deleted.

How many people are using ChatGPT?

As of March 2024, ChatGPT has over 180 million users. Several sources have indicated that most of these subscribers are using the free version of the chatbot. 

Slow and steady wins the race

OpenAI is aware of the need to proceed more slowly as ChatGPT gains more skills and internet access. Moving too quickly could lead to backlash and potential regulatory burdens.

Michael Bizzaco
Michael Bizzaco has been writing about and working with consumer tech for well over a decade, writing about everything from…