
Helen Dobson

Commercial


Whether interacting with chatbots, leaning on Google Translate or interrogating models such as ChatGPT, it’s impossible to deny that the use of generative AI (GenAI) is on the rise. The technology has much to offer, and businesses are understandably keen to leverage its numerous applications and significant potential to drive efficiency in their operations. While GenAI is the big new thing for content and media creation, the technology is still in its relative infancy, and its use is not without limitations and risks.


The risks of generative AI

ChatGPT is a ‘large language model’, or LLM, which has been trained on a massive dataset to generate human-like text. Its function is to predict a plausible next word, so as to create a coherent-sounding, linguistically correct response to the question posed. Accuracy and appropriateness, however, are not built into its design. It is known to ‘hallucinate’, at times with devastating effect: reported cases include inventing a fictitious sexual harassment scandal and falsely claiming that an Australian mayor had been imprisoned for bribery. It can also demonstrate bias and toxicity. Currency is a further issue, determined by the knowledge sets integrated into the model; the free, publicly available version of ChatGPT, for example, is only trained on data up to the end of September 2021.

In machine translation (MT), accuracy is similarly a concern. In January 2023, the All-Party Parliamentary Group for Languages highlighted that machine translation suffers from cultural blind spots and that its output consistently contains more errors than human translation. Even at the highest quality scores of around 90% accuracy, the impact of 1 in every 10 words being wrong, or 10 in every 100 pages being misleading, should not be underestimated. MT’s use in translating refugee testimony has created discrepancies significant enough for US courts to reject asylum claims. Fake translation of real news is a further pitfall. GCHQ, which advises the government on languages, has highlighted society’s need for education in digital literacy: not to put people off MT, but to encourage its responsible use.

The risks of using GenAI extend further, to data security and privacy, confidentiality, copyright infringement, and competition. ChatGPT is currently the subject of several data privacy claims and investigations in multiple jurisdictions. In the US, Getty Images has claimed that Stability AI’s use of its content to ‘train’ Stable Diffusion (a text-to-image model released in 2022) infringed its rights, raising questions about both the transient input use of Getty’s content to train the AI and the output use of the generated content.

The legislative and regulatory framework has some catching up to do to meet the pace of technological change, and on the question of ownership of IP rights in the output works, international consensus remains to be found. In the US, copyright protection for AI-generated works is firmly excluded. Meanwhile, the UK’s copyright legislation allows for the possibility of human ownership of computer-generated works, but in whom the rights vest (as between the program developer and the user) remains untested.

 

Leveraging the benefits – proceed, but with caution

Given the direction of technological travel, GenAI will inescapably affect all businesses, and there are certainly benefits to be derived from its informed and considered use.

Much of the legal territory, however, is still emerging and untested. With ultimate responsibility for its use falling on users, early-adopter businesses would do well to implement certain core organisational structures to safeguard against the practical and legal risks:

  • Governance: Implementing organisational policies and record-keeping processes can help protect your business against misuse of GenAI and safeguard against associated risks to confidentiality, data security and the preservation of IP rights.
  • Training: Educate your workforce on the risks of GenAI, particularly as regards the nature and level of content they submit and the credence they give to the output.

  • Selective use: GenAI tools are better suited to applications where accuracy can be easily verified. ChatGPT may be able to spin some convincing marketing copy, but is ill-suited to producing technical guidance documents or legal contracts. MT may be useful for understanding the gist of a text or creating a rough first draft, but is not ready for generalised use as your resident business translator.

  • Human intervention: However your business uses GenAI, inputs should be checked before use against your legal and contractual confidentiality and data privacy obligations (ensuring that the use of GenAI will not breach any law or contract to which you are subject), while outputs will need human review to confirm accuracy and currency, and to check for bias, toxicity and cultural appropriateness.

  • Legal guidance: Seek early input if you have concerns or queries about the legal, regulatory or contractual risks and implications of using GenAI in your business.

 

If you have queries regarding the potential legal issues that may arise from the use of AI in your business, contact our Commercial & Technology team on [email protected].


Consistent with our policy when giving comment and advice on a non-specific basis, we cannot assume legal responsibility for the accuracy of any particular statement. In the case of specific problems we recommend that professional advice be sought.
