(4 min read)
This article provides practical tips for businesses using ChatGPT. It highlights restrictions in OpenAI's terms of use, including on legal advice and use in the context of disputes. ChatGPT is still being developed, so the legal terms exclude warranties, and independent verification of output is advised. Disclosure of AI-generated output is required, even for derivative works, to avoid breach. Recommendations include updated internal guidance on legal use, independent verification and proper citation. The article also flags, but does not cover, broader contractual, IP, regulatory and ethical issues.
We've looked at some of the risks and issues to be aware of when using ChatGPT and provided some tips to help your business get ahead of them.
An important point before we continue: we will refer to the 'Terms', meaning the ChatGPT terms of use provided by OpenAI (including its usage policies), available here, which OpenAI can update at any time. The Terms apply to your use of ChatGPT and contain important conditions and restrictions which may, by extension, already apply to data held or used by your organisation. So it is important to understand how ChatGPT is being used, and what for.
Issue 1: (In)Accuracy
The ChatGPT interface will openly tell you that it may not be accurate, but the Terms go significantly further: ChatGPT services are provided 'AS IS', and no warranties as to the quality, use or accuracy of the Content are given.
OpenAI makes clear in the Terms:
"Artificial intelligence and machine learning are rapidly evolving fields of study. We are constantly working to improve our Services to make them more accurate, reliable, safe and beneficial. Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output."
Our Tip: Provide guidance to your organisation on the potential inaccuracy of ChatGPT and the need to independently verify any responses, including financial and market information. Be wary about using it where your business or customers may suffer loss, including loss of confidentiality of data. Complete a spot-check of data your organisation is using from ChatGPT and ensure it has been independently verified.
Issue 2: Disclosing the use of AI
While ChatGPT doesn't seek to be specifically referenced, OpenAI requires users to disclose where information is an output of artificial intelligence, and users mustn't suggest that output from ChatGPT was human-generated. We're yet to see if and how OpenAI would seek to enforce this requirement (and, given that the dispute resolution clause in the Terms refers any dispute exclusively to mandatory confidential arbitration, we are unlikely to have visibility of this), but it is a condition of use to be mindful of.
Ensuring that ChatGPT is cited, not only on first publication of ChatGPT-generated information but also in derivative works, is going to be difficult to police. For example, an employee puts together a meeting pack using information from ChatGPT without citing ChatGPT as a source. That meeting pack is then used by another team to create a publicly available document, e.g. an annual report, marketing material, training material or a post on social media. The organisation that publishes the material is now in breach of the Terms.
Our Tip: Provide guidance to your organisation on the need to cite Generative AI as the source of output where applicable. Pay close attention to business development and marketing materials, as well as blogs or social media accounts run independently by employees (for example, personal LinkedIn pages). Complete a spot-check of data your organisation is using from ChatGPT and ensure it has been cited and not presented as human-generated.
Issue 3: Legal Use
Whilst it might be tempting to pop a 'quick' legal question into ChatGPT, OpenAI's Usage Policies specifically restrict use of ChatGPT for the unauthorised practice of law (that is, without a qualified person reviewing the information) because 'OpenAI's models are not fine-tuned to provide legal advice'.
For businesses involved, or potentially involved, in a dispute, there are additional points of concern because transcripts of chats with ChatGPT are likely to be deemed a document, or class of document, which may hold information relevant to a dispute. So, if employees have used, or continue to use, ChatGPT to ask questions about the dispute and/or to summarise existing content which captures privileged material, then it is possible that:
(i) the transcripts would have to be disclosed in response to the action, including in response to pre-action applications; and
(ii) legal privilege may have been waived in relation to any information that has been quoted, and therefore in relation to the source documents containing the advice.
Our Tip: Update guidance documents and communicate to employees that ChatGPT should not be used for obtaining or discussing legal advice, or for discussing disputes or potential disputes, and include the health warning that ChatGPT conversations may be disclosable documents. Consider disabling use of ChatGPT when disputes arise.
These are just some of the issues to be aware of in the ChatGPT Terms, and different offerings will have their own contractual points alongside data, IP, regulatory and ethical considerations.