Anyone who had the opportunity to attend Legalweek last week in New York City might almost have gotten that impression.
That is not to say that the importance of artificial intelligence for the legal industry was denied outright. Its relevance to the vast field of discovery, for example, is widely acknowledged. The undisputed ability of the latest language models to summarize documents is likewise presented as highly promising. Only when it comes to drafting legal texts, even as a first draft, did I perceive icy rejection. Why is that?
As usual when opinions form, there is rarely one single decisive cause. The most common argument I heard was hallucinations, a topic that becomes all the more prominent when software suddenly invents precedents that do not exist. The argument that no time is actually saved if every draft of a brief has to be checked in detail, just as it is today, also sounds quite factual. The fact that ChatGPT-4's knowledge cutoff is September 2021 certainly does not build confidence, even if interfaces for updating it were announced recently. And in the end, the sentence "Lawyers hate change", delivered in front of a large auditorium, went unchallenged.
So it is probably a mix of several motives that explains why no hype around ChatGPT & Co. could be detected when it comes to lawyers' "writing".
Does this mean that the future of the legal industry will not be (radically) changed by Large Language Models after all? I don't think so, but change takes time on the part of those affected, and unimpeachable quality on the provider side. It also takes participation: without training a model on one's own content (data), the necessary quality cannot be ensured, and that costs time and money.