Employers warned of risks posed by chatbots like ChatGPT in the workplace

To prevent data loss, employers should keep a close eye on the use of chatbots such as ChatGPT, especially where such use is unauthorised, advises law firm ENS.

In an article published on the law firm’s website, ENS warns that multiple employers, including global tech giants, have suffered “conversational AI leaks” in recent months.

According to the law firm, “conversational AI leak” is a phrase used to describe a loss of data involving a chatbot: incidents in which sensitive information fed into chatbots like ChatGPT is unintentionally exposed.

The authors of the article explain that when information is disclosed to a chatbot, it is sent to a third-party server, where it may be used to train the chatbot further.

“What that means, in simple terms, is that the information input into the chatbot may be used by the chatbot in the future generation of responses. This becomes particularly problematic when the chatbot has access to and is using confidential, sensitive or personal information that should not be publicly available.”

ENS says there have been numerous incidents where employees have unintentionally exposed sensitive employer information by using publicly available chatbots without employer awareness. Instances include using chatbots to identify and fix source code errors, optimise source code, and generate meeting notes from uploaded recordings.

IBM’s Cost of a Data Breach Report 2023 states that between March 2022 and March 2023, the global average cost of a data breach reached an all-time high of USD 4.45 million; for South Africa specifically, it exceeded ZAR 50 million.

“The cost of conversational AI leaks can be crippling for an employer and as a result, employers should be front-footed in their approach to the use of chatbots in the workplace,” says ENS.

The law firm says there is no one-size-fits-all approach to AI risk management; the approach adopted will largely depend on the extent to which AI is incorporated into an employer’s operations.

However, it adds that, irrespective of the chosen approach, without proactive regulation there is a danger that employees may resort to unauthorised AI tools. This may result in “shadow IT”: an unsanctioned IT environment existing in parallel to the employer’s approved IT infrastructure and systems.

“The problem with this is that there is no internal regulation, security or governance over the shadow IT which may expose the employer to security vulnerabilities, data leaks, intellectual property disclosure, and other issues. So, employers and employees should remain cautious of which generative AI tools they use, where they source their information from and what information is being shared in that process,” says ENS.

ENS provides the following tips for employers and employees to prevent conversational AI leaks.

For employers:

  • Take a proactive approach to regulating the use of generative AI in the workplace by procuring enterprise version licences, and/or implementing internal policies and procedures to regulate organisational use of generative AI.
  • Review your contracts with AI service providers to ensure that you adequately protect your intellectual property.
  • Ensure data security is your top priority and provide generative AI with information on a need-to-know basis (see the sketch after this list).
  • Ensure any customised or personalised chatbots are configured responsibly and ethically.
  • Train employees on how to use chatbots responsibly.
  • Monitor chatbots for compliance with privacy regulations, data protection measures, and internal employer policies.
  • Implement and maintain internal policies governing the use of generative AI in the workplace, covering, inter alia: authorised generative AI systems; acceptable use; prohibited activities, such as the sharing of personal or confidential information; data protection; intellectual property; and liability and disciplinary procedures.
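
To make the “need-to-know” point concrete, below is a minimal sketch of the kind of internal control an employer might place in front of a public chatbot: outgoing prompts are screened for obviously sensitive material before they can leave the organisation. The patterns and the screen_prompt function are illustrative assumptions, not a real product or vendor API; a production control would need far broader coverage.

```python
import re

# Illustrative only: placeholder patterns for obviously sensitive material.
# A real deployment would use a proper data-loss-prevention ruleset.
SENSITIVE_PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),                       # email addresses
    re.compile(r"\b\d{13}\b"),                                     # 13-digit numbers, e.g. SA ID numbers
    re.compile(r"confidential|internal use only", re.IGNORECASE),  # classification markers
]

def screen_prompt(prompt: str) -> str:
    """Return the prompt unchanged if it passes screening; raise otherwise."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError(
                f"Prompt blocked by AI-use policy (matched {pattern.pattern!r})"
            )
    return prompt

if __name__ == "__main__":
    try:
        screen_prompt("Summarise this CONFIDENTIAL board minute for me.")
    except ValueError as err:
        print(err)  # Prompt blocked by AI-use policy (...)
```

Screening of this kind supplements, rather than replaces, the policies, training and monitoring described above.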

For employees:

  • Do not use AI tools that have not been approved by your employer for work purposes.
  • Do not share personal, proprietary, or confidential information with chatbots.
  • Do not upload any employer intellectual property (copyrighted material, such as data, documents, or source code).
  • Understand the legal, commercial and technical risks associated with the use of chatbots, as well as any policies implemented by your employer.
  • Confirm the accuracy of chatbot responses, particularly where the responses may influence critical decisions (in other words, ensure a human being vets the responses of the chatbot and applies their mind to the output).
  • Implement processes to monitor for and prevent data bias and discriminatory outputs generated by chatbots.
  • Familiarise yourself with acceptable chatbot usage and emerging standards, guidelines and frameworks on ethical and responsible AI.
  • Report any security or privacy concerns when using chatbots.