Yes, sort of. But read on.
On November 3rd, the ABA Journal reported on a story that I thought other sources covering it had somewhat overblown.
During the last week of October, legal technology company CaseCrunch held an AI-versus-lawyer competition, and the machines certainly won. The faceoff (poor word choice) pitted over 100 attorneys from firms like DLA Piper and Allen & Overy against CaseCruncher Alpha in predicting the outcomes of nearly 800 real, historic insurance misselling claims. The objective was to correctly determine whether each claim would succeed.
CaseCrunch's website said the software predicted outcomes with almost 87 percent accuracy, while the lawyers were 62 percent correct. "The main reason for the large winning margin seems to be that the network had a better grasp of the importance of nonlegal factors than lawyers," read a statement on the website.
A little digging by another source revealed ancillary facts. Legal IT Insider quoted Ralph Cox, a patent litigation partner at Clyde & Co. in London and an attendee of the event: "I keep an open mind. However, it struck me that the computer was given all the database information and therefore had an unfair advantage. Lawyers who took part were from outside this area of expertise without any real experience of [Payment Protection Insurance]."
In sum, the lawyers recruited for this competition were not subject matter experts in the topic being tested. Ludwig Bull, CaseCrunch's scientific director, told the same publication that the subject matter imbalance was hard to reconcile.
"We struggled to find an area because lawyers specialise in so many niche areas. So we had to find something that was relatively easily intelligible to most lawyers and where they could also understand the underlying principles relatively quickly. It was as fair as we could make it," said Bull.
The competition was covered by the BBC and elsewhere as though it were an epic man-versus-machine matchup, like the legal world's version of Kasparov versus Deep Blue, a pair of six-game chess matches in the 1990s between the world's reigning chess champion, Garry Kasparov, and IBM's supercomputer. While Kasparov won the first match, Deep Blue won the second. At the time, it was seen as a benchmark in the development of artificial intelligence.
Bull seemed to modify the story when he told The American Lawyer: "These results do not mean that machines are generally better at predicting outcomes than human lawyers. These results show that if the question is defined precisely, machines are able to compete with and sometimes outperform human lawyers."
From my foxhole, this was not precisely a fair matchup – and I am loath to draw any significant conclusions from it, especially when the press seems eager to over-hype it and slow to question the methodologies. Let CaseCruncher Alpha take on real subject matter legal experts to prove its expertise on a level playing field.