The question is no longer "What can ChatGPT do?" It's "What should I share with it?"
Internet users are generally aware of the risks of data breaches and the ways our personal information is used online. But ChatGPT's seductive capabilities seem to have created a blind spot around dangers we normally take precautions to avoid. OpenAI recently announced a new privacy feature that lets ChatGPT users disable chat history, preventing their conversations from being used to refine and improve the model.
"It's a step in the right direction," said Nader Heinen, a Gartner VP of privacy research with two decades of experience in corporate cybersecurity and data protection. "But the fundamental problem with privacy and AI is that there's very little you can do in terms of retroactive governance once the model is built."
Think of ChatGPT as a friendly stranger sitting behind you on the bus filming you with a camera phone, Heinen says. "They sound so nice, they seem like good people. Would you talk to them the same way? Because that's what it is." He continued, "It's well-intentioned, but if it ends up hurting you, it's like a sociopath: it won't think twice about it."
Even OpenAI CEO Sam Altman has acknowledged the risks of relying on ChatGPT. "It's a mistake to be relying on it for anything important right now. We have lots of work to do on robustness and truthfulness," he tweeted in December 2022.
Basically, treat your ChatGPT prompts the way you'd treat anything else you publish online. "The best assumption is that anyone in the world can read anything you put on the internet, whether it's emails, social media posts, blogs, or LLM prompts. Don't post anything you wouldn't want anyone else to read," said Gary Smith, Fletcher Jones Professor of Economics at Pomona College and author of Distrust: Big Data, Data-Torturing, and the Assault on Science. ChatGPT can be used as an alternative to Google Search or Wikipedia, as long as its output is fact-checked, he said. But it shouldn't be relied on for much more than that.
Above all, there are real risks here, made harder to see by ChatGPT's allure. Whether you're using ChatGPT in your personal life or to boost productivity at work, consider this a friendly reminder to think carefully about what you share with it.
Understand the risks of using ChatGPT.
First, let's look at what OpenAI tells users about how it uses their data. Not everyone has the same privacy priorities, but it's worth knowing the fine print before the next time you open ChatGPT.
1. Hackers could break into the app and steal your data.
First and foremost, there's the possibility that someone outside of OpenAI could hack in and steal your data. Using a third-party service always carries an inherent risk of exposure through bugs and hackers, and ChatGPT is no exception. In March 2023, a ChatGPT bug was discovered that exposed chat titles, the first message of new conversations, and payment information from ChatGPT Plus users.
"All this information that you're putting into it is extremely problematic, because there's a good chance it could be vulnerable to machine learning attacks. That's number one," Heinen said. "Number two, it's probably sitting somewhere in a log in clear text. Whether or not anyone's looking at it, I don't know, and neither do you. That's the problem."
2. Your conversations are stored on a server somewhere.
Although it's unlikely, certain OpenAI employees do have access to user content. On ChatGPT's FAQ page, OpenAI says user content is stored on its systems and on the "systems of trusted service providers in the U.S." So while OpenAI removes personally identifiable information, content exists in raw form on its servers before it is "de-identified." Some authorized OpenAI personnel can access user content for four specific reasons, one of which is to "fine-tune" its models, unless users opt out.
3. Your conversations are used to train the model (unless you opt out).
We'll get to opting out in a moment, but unless you do, your conversations are used to train ChatGPT. According to its data usage policy, which is scattered across several different pages on its site, OpenAI says, "we may use the data you provide us to improve our models." On another page, OpenAI says it may "aggregate or de-identify Personal Information and use the aggregated information to analyze the effectiveness of our Services." In other words, anything the model "learns" from your conversations could theoretically become public knowledge.
Previously, users could only opt out of having their data shared with the model via a Google Form linked on the FAQ page. Now, OpenAI has introduced a more explicit way to disable data sharing: a toggle setting in your ChatGPT account. But even with this new "incognito mode," conversations are stored on OpenAI's servers for 30 days, and the company says little about how that data is protected in the meantime.
4. Your data won't be sold to third parties, the company says.
OpenAI says it doesn't share user data with third parties for marketing or advertising purposes, so that's one less thing to worry about. But it does share user data with vendors and service providers for site maintenance and operations.
What can happen if you use ChatGPT at work?
ChatGPT and other generative AI tools have been touted as the ultimate productivity hack. ChatGPT can draft articles, emails, social media posts, and summaries of longer pieces of text. "There's not an example you can think of that hasn't already been done," Heinen said.
But when Samsung employees used ChatGPT to check their code, they inadvertently exposed trade secrets. The electronics company has since banned the use of ChatGPT and threatened employees with disciplinary action if they fail to comply with the new restrictions. Financial institutions like JPMorgan, Bank of America, and Citigroup have also banned or restricted the use of ChatGPT, owing to strict financial regulations around third-party messaging. Apple has banned its employees from using the chatbot as well.
The temptation to reduce mundane tasks to mere seconds obscures the fact that users are essentially publishing this information online. "You're thinking of it like you think of a calculator, you're thinking of it like Excel," Heinen said. "You're not thinking that this information is going into the cloud and that it's going to be there forever, either in a log, or in the model."
So if you want to use ChatGPT at work to break down concepts you don't understand, write copy, or analyze publicly available data, and there's no rule against it, proceed with caution. But think very carefully before you, say, ask it to review the code for the top-secret missile guidance system you're working on, or to summarize a meeting with your boss about a corporate spy at a competing company. That could cost you your job, or worse.
What can happen if you use ChatGPT as a therapist?
A survey conducted by health tech company Tebra found that one in four Americans are more likely to talk to an AI chatbot than to attend therapy. Examples have already popped up of people using ChatGPT as a form of therapy or seeking its help with substance abuse. These examples were shared as intriguing use cases, showing how ChatGPT can be a helpful, non-judgmental, and anonymous conversation partner. But your deepest, darkest confessions are stored on a server somewhere.
Heinen said people tend to think of their ChatGPT sessions as a "walled garden": "When I log out, everything in [the session] gets flushed down the drain, and that's the end of the conversation. But that's not the case."
If you're a person on the internet, your personal data is already everywhere. But ChatGPT is a different kind of medium, one where you may feel compelled to express intimate and personal thoughts. "LLMs are an illusion — a powerful illusion, but still an illusion reminiscent of the ELIZA computer program created by Joseph Weizenbaum in the 1960s," Smith said.
Smith is referring to the "ELIZA effect," the human tendency to anthropomorphize inanimate things. "Even though users knew they were interacting with a computer program, many were convinced that the program had human-like intelligence and emotions and were happy to share their deepest feelings and most closely held secrets."
So until OpenAI says more about how it secures your conversations, try not to be lulled into treating ChatGPT as a mental health wizard, and don't blurt out your innermost thoughts unless you're prepared to broadcast them to the world.
How to protect your data on ChatGPT
There is a way to go incognito when using ChatGPT. Your conversations will still be saved for 30 days, but they won't be used to train the model. Click your account name to open Settings, then click "Data Controls." From there you can toggle off "Chat History & Training." You can also clear past conversations by clicking "General" and then "Clear All Chats."
To disable your chat history, go to the Settings page.
Credit: OpenAI
Credit: mashable.com