- In three separate instances, Samsung Electronics employees in the company’s semiconductor business unit put sensitive corporate data into ChatGPT, according to a report from The Economist Korea.
- One Samsung employee entered faulty source code from the Samsung Electronics semiconductor facility measurement database download program into ChatGPT, looking for a fix. Another entered program code for identifying defective equipment, seeking code optimization. A third converted a smartphone recording of a company meeting into a document file and entered it into ChatGPT to generate meeting minutes, according to the report.
- After the data leaks, the company capped ChatGPT uploads at 1,024 bytes per prompt, according to the report. Samsung Electronics did not respond to requests for comment.
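A per-prompt byte cap like the one Samsung reportedly imposed could be enforced client-side with a simple size check before any text is sent. The sketch below is illustrative only; the function name and enforcement point are assumptions, not a description of Samsung's actual controls:

```python
MAX_PROMPT_BYTES = 1024  # the cap reported by The Economist Korea

def prompt_within_limit(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the prompt fits within the byte limit when UTF-8 encoded.

    Checking encoded bytes rather than character count matters because
    non-ASCII characters occupy multiple bytes in UTF-8.
    """
    return len(prompt.encode("utf-8")) <= limit

# A short question passes; a pasted source file typically would not.
print(prompt_within_limit("Summarize this meeting."))  # True
print(prompt_within_limit("x" * 2000))                 # False
```

Measuring UTF-8 bytes rather than characters keeps the check consistent for prompts containing Korean or other multibyte text.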
ChatGPT’s web-based interface uses input data to train and improve the tool, which raises data privacy concerns when employees include corporate data in their prompts.
The fear is that if employees input company data into ChatGPT, the AI could incorporate that information into its learning model. The data could then become part of its knowledge base, and other users prompting ChatGPT could see responses containing the proprietary information.
For enterprise users accessing ChatGPT through its API, OpenAI offers stronger data controls. Data submitted through the API is not used for model training or other service improvements unless the organization opts in, and API users get a default 30-day data retention policy, with options for shorter retention windows.
While ChatGPT and its GPT-4 engine can aid programmers, companies need to be clear about what may be included in prompts to avoid leaking internal company data.
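One lightweight way to back such a policy with tooling is to scan outbound prompts for patterns a company considers sensitive before they reach an external service. This is a minimal sketch with hypothetical patterns; a real deployment would use organization-specific rules and likely a dedicated data-loss-prevention tool:

```python
import re

# Illustrative patterns only; real rules would be organization-specific.
SENSITIVE_PATTERNS = {
    "internal hostname": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
    "api key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "confidential marking": re.compile(r"(?i)\b(?:confidential|proprietary)\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

print(flag_sensitive("Optimize this loop for me"))              # []
print(flag_sensitive("# CONFIDENTIAL: defect-detection code"))  # ['confidential marking']
```

A check like this could run in a proxy or browser extension, blocking or warning before a flagged prompt is submitted.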
Nearly half of HR leaders said they are in the process of formulating guidance on employee usage of OpenAI’s ChatGPT, according to Gartner data.
Data privacy concerns have pushed businesses and governments to question the security of OpenAI’s models and ChatGPT. Italy imposed an immediate temporary limitation “on the processing of Italian users’ data by OpenAI,” citing a ChatGPT bug in March as one of the reasons behind the decision.
Last month, a bug in an open-source library used by ChatGPT caused a small percentage of users to see the titles of other active users’ conversation histories.
OpenAI’s investigation found that the same bug, since patched, also exposed payment-related information for 1.2% of active ChatGPT Plus subscribers, including first and last names, email addresses, payment addresses, the last four digits of credit card numbers and credit card expiration dates, the company said in a March blog post.