Lawyers are turning to artificial intelligence to write briefs, analyze contracts and automate mundane tasks. But it comes with risks.
To the surprise of no one, these things that probabilistically generate strings of text make shit up. Sure, they're biased towards previously written strings of words, which are hopefully true, but there's no reason something merely "correctish" can't be shit out too.
I'm not a lawyer, but I can see a good way for lawyers to use ChatGPT: tell it to list laws that are potentially related to the case, then manually check whether those laws actually apply (a rough sketch of that loop is below). This would work nicely in countries with Roman law, and perhaps in countries with tribal law too (the article is from the USA), as long as the model is also fed older cases for precedent.
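A minimal sketch of that research-assistant pattern, assuming Python and the OpenAI v1 SDK, with an API key in the environment and a placeholder model name, case summary, and prompt (all assumptions of mine, not anything from the article): the model only proposes candidate statutes, and a human checks every citation against the primary source before it goes anywhere near a filing.

```python
# Sketch: use an LLM as a search/triage aid, never as the drafter of record.
# Assumes: `pip install openai`, OPENAI_API_KEY set in the environment,
# and that the model named below is available to your account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder case summary, purely for illustration.
case_summary = (
    "Client was fired after reporting unsafe working conditions to a state "
    "regulator; the employer claims the dismissal was for unrelated "
    "performance issues."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": (
                "List statutes or regulations that might be relevant to the "
                "case described by the user. For each, give the jurisdiction "
                "and the official citation. Do not argue the case; this is a "
                "starting point for human verification only."
            ),
        },
        {"role": "user", "content": case_summary},
    ],
)

# The output is a list of *leads*, not authority. Every citation must be
# looked up in the primary source; anything that can't be verified gets dropped.
print(response.choices[0].message.content)
```

The same caveat the thread keeps hammering applies here: the model can and will invent plausible-looking citations, so the manual lookup step is the whole point, not an optional extra.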
And... really, that's the best use for these bots IMO: asking them to sort, filter and search information in messy, sprawling systems. Letting one write things for you, like those two lawyers did, is worse than laziness: it reeks of stupidity.
It's also immoral. The lawyer is a human being, someone who can be held responsible for their actions; ChatGPT is not, and as such it should not be in charge of decisions that affect human lives.