
The surge in news coverage of generative artificial intelligence (AI) tools like ChatGPT and Midjourney has employees eager to discover how these accessible tools can make their jobs easier. Employers, meanwhile, are concerned about the legal implications of using such tools and are exploring different approaches to their own AI usage policies.

However, blanket policies that try to account for the risks of using any AI technology can be overly restrictive, or so generic that they fail to address the legal considerations specific to AI usage. The rules for using AI technology under an enterprise license are likely more permissive than those for consumer-facing AI tools, as the former will typically include broader confidentiality and indemnity protections than the latter. For this reason, employee guidance on AI usage should identify the specific AI tools subject to particular guidance and should differentiate between the personal and enterprise licenses under which the same tool, like ChatGPT, may be offered. Those interested in a more general conversation about current AI frameworks can see our article here.

Broadly speaking, AI tools can present both incoming and outgoing intellectual property issues. That is, usage can increase a company’s risk of infringing third-party intellectual property, and it can also increase a company’s risk that its own intellectual property will be improperly disclosed. In evaluating specific use cases, the following questions should be asked:

  • What types of information are employees disclosing in prompts to the generative AI tool? By design, many generative AI tools improve their models by incorporating user prompts into their training data. Some generative AI tools offer options that guarantee user prompts will not be used to further train the underlying model, but most generally available tools lack such guarantees. Even where those protections exist, guidance should be tailored to the sensitivity of the information (for example, to protect the company’s trade secrets) and to the company’s obligations to third parties with respect to that information (for example, contractual obligations, data privacy laws and employment laws).
  • What desired use cases give rise to the risk of infringing third-party intellectual property? Repurposing AI-generated content in externally facing communications, like marketing and customer communications, or incorporating AI-generated source code into company codebases can raise copyright infringement problems. However, copyright infringement analysis is nuanced and often fact specific. Advising employees on proper use will require an understanding of how they plan to use generative AI tools.
  • What low-risk uses can the company endorse? First, the default rule should be that no confidential or personal information is included in any prompt. As a practical matter, some companies do not want to ban the use of generally available AI tools outright, and giving employees a list of permissible uses can be a good way to mitigate risk. Generative AI tools can be excellent for brainstorming, and when the output is not incorporated into a final product, the risk is relatively low. For example, using AI tools for research carries low risk as long as employees are aware of the technology’s limitations (for example, that it is sometimes wrong). This calls to mind the early days of the generally accessible Internet, when we were not allowed to cite websites as sources in term papers. Of course, these rules will evolve as we better understand the technology.