Everybody seems to be super-excited about the launch of generative AI. Well, almost everybody. Samsung has reportedly instituted a company-wide ban on the technology after an internal leak of sensitive data via ChatGPT.
Technically, the problem isn’t the generative AI itself. It’s that the technology can’t keep a secret. Samsung engineers in South Korea uploaded sensitive internal code to ChatGPT, and that code was later leaked. This sort of poor operational security isn’t going to work for the company.
Safety at Samsung
The door is open for the likes of Google Bard and ChatGPT to make a reappearance in the company’s workflows. The company said, in a memo to employees, that “HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency. However, until these measures are prepared, we are temporarily restricting the use of generative AI.”
“We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment.”
Read More: Samsung considers changing default search engine to Bing
The security of information shared with ChatGPT has been a concern for some time. Italy banned the technology entirely until OpenAI promised to show users how to opt out of data sharing. Other companies that deal with sensitive data have similarly kept generative AI at arm’s length. Samsung still allows staff to use ChatGPT and similar tools in a personal capacity, but it will probably prefer its own in-development competitor once that comes to market.
Source: Bloomberg