Monday, March 27, 2023

Legalweek: Is the hype around ChatGPT just a bubble?

Anyone who had the opportunity to attend Legalweek last week in New York City might almost have gotten that impression.

That is not to say that the importance of artificial intelligence for the legal industry was denied in general. Its relevance to the vast field of discovery, for example, is widely acknowledged. The undisputed capabilities of the latest language models when it comes to summarizing documents are also regarded as highly promising. Only when it comes to writing legal texts, even as a first draft, did I perceive icy rejection. Why is that?

As usual, it is not one single cause that is decisive when an opinion takes shape. The argument I heard most often was hallucinations. This topic may be all the more prominent when a piece of software suddenly invents precedents that do not exist. The argument that no time is actually saved if every draft of a brief has to be checked in detail - just as it is today - also carries weight. The fact that GPT-4's training data only extends to September 2021 certainly does not build confidence, even if interfaces for keeping the data current were announced recently. And in the end, the sentence "Lawyers hate change", delivered in front of a large auditorium, remained unchallenged.

So it is probably a mix of several motives that explains why no hype around ChatGPT & Co. could be detected when it comes to lawyers' "writing".

Does this mean that the future of the legal industry will not be (radically) changed by Large Language Models after all? I don't think so, but change takes time on the part of those affected - and unimpeachable quality on the provider side. And it needs participation: without training a model on one's own content (data), the necessary quality cannot be ensured, and that costs time and money.

Wednesday, March 15, 2023

GPT in general and the legal industry in particular: Report from an interesting roundtable.

Yesterday at noon (local time), a roundtable on "Beyond ChatGPT: What Generative AI Actually Means for Law Firms, In-House, Legal Tech and More" was held in the US. Legaltech News Editor-in-Chief Stephanie Wilkins welcomed an illustrious crowd of guests.

Without a doubt, the surprise hit came from Pablo Arredondo, Co-Founder and CIO of Casetext. He announced not only the live launch of GPT-4 but also, simultaneously, that of "CoCounsel", a Casetext application that already relies on OpenAI's brand-new language model. He raved about the new GPT version, predicting that it would simply be impossible for law firms not to use the new technology because it was so much better than humans(!). To be fair, he added that he had analyzed his own litigation files with CoCounsel, which alerted him to errors and gaps in reasoning in his own work.

Casey Flaherty, Co-Founder and Chief Strategy Officer of LexFusion, initially tried to put things into perspective: GPT is ultimately just one of many available language models that has the hype on its side. However, he also emphasized that the threshold to market maturity had been overcome; if ChatGPT was the trailer, the movies would now follow. His advice: "This is happening now; it will be done by you or to you." 

It was also exciting to hear that Flaherty, unlike the other panelists, very much expects technology to partially replace lawyers.

Darth Vaughn, Litigation Counsel and Legal Innovation and Technology Operations Manager at Ford Motor Company, took a different view: he, too, is excited about new technology, but stressed that domain knowledge must remain available in the organization at all times in order to control the new tools.

Jae Um, Founder and Executive Director at Six Parsecs, saw a large indirect impact on the practice of law. She, too, regards Large Language Models as indispensable in law firm practice. However, she emphasized that the use of ultra-fast, high-end technology means the end of traditional billing based on time and materials. Law firms are urgently called upon to try out alternative remuneration models, and it will probably come down to trial and error.

In summary, common ground can be found in the view of US commercial law firms and providers that the use of state-of-the-art technology is indispensable. Which language models will ultimately prevail cannot yet be judged. I heard no reservations about training the models of external providers with a firm's own content (i.e. client data). And whether this attitude can be transferred 1:1 to the continental European market with its completely different legal system could not be addressed in this intra-American format.



Monday, February 27, 2023

ChatGPT and Academia - a tense relationship

After the first two days of the IRI§23 conference focused primarily on practical usability, at least in the streams related to ChatGPT, Saturday was devoted to academic topics.

The first was the question of whether language models and their results can be protected under copyright law. The answer to the latter is probably complex enough on its own; there was no real need for the appeal to abolish copyright in the field of science altogether.

Highly interesting, however, were the reflections on the subject of teaching and on the subject of citation. With regard to teaching, the thesis was put forward without contradiction that the age of "academic homework" is finally over. No teacher can be expected to take responsibility for assignments that are actually written by highly developed chatbots.

And scientific citation - is it also endangered? I would say: yes, certainly. How should a citation be composed if the result is not reproducible? Here the new technologies differ massively from the common practice of citing web pages with an exact date of retrieval. But I also wonder whether the question is really that prominent. Will there be legions of scientists using ChatGPT to write their papers? If so, then only in marginal areas, where a copy-and-paste from Wikipedia - with an appropriate reference - would do just as well.

Conclusion: some questions can be resolved now, others only in the more distant future. The discussion will continue on academic ground.

Friday, February 24, 2023

ChatGPT - Revolution or bubble?



At "IRI§23", the International Legal Informatics Symposium in Salzburg, ChatGPT was the star. The deserving organizers had not quite expected this, but the number of visitors to the corresponding streams was high.

But what exactly is so special about this new technology?
  • The ability to correctly interpret human language, say some.
  • The ability to even write texts in a flawless form, the others.
  • ChatGPT revolutionizes database searching, according to some; or not.
  • It is a system that should be used only by experts because of its error-proneness - or, according to another opinion, only by non-professionals.
And so it went on, and every opinion was well-founded! There was even disagreement on the question of whether ChatGPT would overrun the legal world like a tsunami or whether it was just a risky bubble.

On the last question, at least, I have a clear opinion: ChatGPT (and its epigones) are too big to fail. The billions of dollars invested leave no room for any other outcome. And after all, never in history has a new technology found so many users so quickly.

Conversational AI is making history, but no one knows exactly what form it will take.

Tuesday, June 21, 2022

Allowed to be wrong sometimes


For years now, the question has been discussed again and again in the relevant formats whether lawyers will need programming skills in the future. I have always denied this - acquiring and maintaining sound legal knowledge is enough to fill one's evenings, leaving no room for another discipline (apart from exceptions).


Today I say: that was too short-sighted. Just as marketing professionals need to have a deep understanding of the processes around digital advertising, lawyers of the future need to have a good understanding of the relevant digital technologies.


They don't have to be the ones writing lines of code for e-discovery suites. But they do need to be able to assess what to expect from the various technologies that are usually grouped under the umbrella term "artificial intelligence." Lawyers and notaries, corporate lawyers, judges, prosecutors and administrative lawyers - all of them will, in the course of their own work, face the question of using technology for certain process or procedural steps and will have to assess for themselves what they can expect from it and what the risks are. By this I don't mean the eternal data protection issues, but issues such as bias, training sets, confidentiality and, above all, how to deal with error rates.


This also creates a whole new set of challenges for legal academia, but even more so for continuing legal education. For a long time, lawyers have seen time constraints as their central problem. The tension will only be resolved if ... but that is another story.


Wednesday, April 6, 2022

Catch-up

Recently, I had the honor of being a jury member at the Global Legal Hackathon. Given its dimensions, there are of course quite a few jury teams, but "mine" is worth a second look. And this is how it came about:

Even a glance shows a highly interesting mix of jury members: geographically, the spectrum of members ranges from South Africa to the Baltics, from Romania to the British Isles. Every single one of them has an independent, individual career - practicing lawyers are represented as well as professional startup consultants, for example. But one thing gives me pause for thought:


Is it pure coincidence that the (originally) German-speaking members of our jury team are all male, while all the others are female? Or is there perhaps a massive #backlog in #diversity in Germany/Austria?


Postscript: this thought is not a hidden critique of the configuration of the jury team - rather, it is a direct consequence of an intense discussion I had with my wife on the topic yesterday; as a doctor in a hospital, she has long been used to diverse teams.


Thursday, March 17, 2022

The darker side of digitization

Last week, New York City hosted this year's Legalweek - an annual, multi-day congress on the digitization of the legal industry. After a two-year, pandemic-related absence, many sessions revolved around the question of what impact COVID-19 has had on digital development. This question is as difficult to answer after the pandemic as the corresponding prophecies were before it. I would like to share two interesting aspects from the event reports here.

First, in his article published on law.com, Rhys Dipshan argues that the demands on judges are increasing as the use of technology in the courts increases. The panel was not concerned with a sophisticated understanding of the mechanisms of artificial intelligence. Rather, the focus was on ostensibly trivial mechanisms such as the questioning of witnesses via video call (Zoom, etc.). Judges should pay more attention to the setting and the possibilities it offers to parties and party representatives.


Concern was caused by the thesis that the digitization of the judiciary does not actually improve access to justice, but rather restricts it. Often it is a matter of simple things, such as access to basic technology. There was mention of a family court that had entered into a cooperation with a local church so that litigants could get access to the internet at all, via digital kiosks. A surprising symbiosis of state and church, and one worth reflecting on further - which brings us back to Lent.

