Never forget that anything you share with ChatGPT can be retained and used to further train the model. Samsung employees learned this the hard way after accidentally leaking top-secret Samsung data.
The leaks happened while employees were using ChatGPT for help at work: Samsung’s semiconductor division had allowed its engineers to use ChatGPT to check source code.
But The Economist Korea reported three separate instances of Samsung employees unintentionally leaking sensitive information to ChatGPT. In one instance, an employee pasted confidential source code into the chat to check it for errors. In another, an employee shared code with ChatGPT and “requested to improve the code.” A third employee shared a recording of a meeting so it could be converted into notes for a presentation. That information is now out in the wild, available to feed ChatGPT.
The leak is a real-world example of the hypothetical scenarios privacy experts have long been concerned about. Other scenarios include sharing confidential legal documents or medical information to summarize or analyze lengthy texts, material that could then be used to refine the model. Experts have warned that this could violate the GDPR, which is one reason Italy recently banned ChatGPT.
Samsung has taken immediate action by limiting each employee’s ChatGPT upload capacity to 1,024 bytes and is investigating those involved in the leak. It is also considering building its own in-house AI chatbot to prevent future embarrassing mishaps. But Samsung is unlikely to get any of its leaked data back: ChatGPT’s data policy says it uses submitted data to train its models unless users request to opt out, and ChatGPT’s usage guide explicitly warns users not to share sensitive information in conversations.
Consider this a cautionary tale to remember the next time you turn to ChatGPT for help. Samsung certainly will.
Credit: mashable.com