The American Bar Association and Mississippi Bar Offer Flawed AI Guidance


Generative AI has created a serious professional-governance challenge for the legal profession. Lawyers must evaluate rapidly changing AI tools, manage uncertain professional risks, and account for a growing number of sanctions cases involving AI-generated fake citations. That challenge is compounded when the institutions lawyers typically rely on for ethical guidance provide advice that is incomplete, imprecise, or incorrect. Recent guidance from the American Bar Association and the Mississippi Bar does exactly that.

Mississippi Ethics Opinion No. 267 is one example. The opinion permits lawyers to use generative AI if they protect client confidentiality, use the technology competently, verify AI outputs, bill reasonably, and obtain informed consent where appropriate. Those general principles are unobjectionable.

The problem is the opinion’s guidance on verification and reliance. The Mississippi opinion states that when a lawyer uses a generative AI tool, the lawyer should “trust but verify” the AI-generated outputs. But that is misguided.

Lawyers should not begin from a posture of trust when using tools known to generate false legal authorities, invent quotations, misstate holdings, and produce confident but inaccurate analysis. The ethical posture should be the opposite: assume the output is unverified until the lawyer independently confirms it.

One might argue this is a distinction without a difference, since the Mississippi opinion still advises lawyers to verify AI outputs. But a state bar should not proclaim that “trust” is the default posture toward a technology whose central professional risk is plausible falsehood.

And the deeper problem appears in the opinion’s discussion of how much verification is actually required. Mississippi Opinion No. 267 states: “a lawyer’s use of a GAI tool designed specifically for the practice of law or to perform a discrete legal task, such as generating ideas, may require less independent verification or review, particularly where a lawyer’s prior experience with the GAI tool provides a reasonable basis for relying on its results.”

That language is not unique to Mississippi. It comes verbatim from ABA Formal Opinion 512, the American Bar Association’s official AI ethics opinion issued in July 2024.

That guidance, adopted by both the ABA and the Mississippi Bar, is wrong.

Consider how it operates in practice. A lawyer uses a legal-specific generative AI tool to conduct legal research. The tool is designed for legal practice and for finding caselaw, a discrete legal task. The lawyer has used the product before, believes it performs well, and has never personally seen it fabricate a case.

Under the ABA and Mississippi opinions, prior experience may provide “a reasonable basis for relying on [the tool’s] results,” and therefore may require “less independent verification or review.” Coupled with the Mississippi Bar’s instruction to “trust but verify,” the practical message is clear: a lawyer may begin from trust and reduce review when using a familiar, legal-specific AI tool.

That is precisely the wrong guidance. If that lawyer trusts the output and applies a lowered standard of verification to AI-generated caselaw research, the lawyer may end up filing fake citations in court, joining a growing list of litigants and lawyers who have done the same.

Prior personal experience is a weak foundation for determining the appropriate level of review. Tools change constantly. Models are updated. Interfaces are redesigned. Guardrails are modified. A lawyer’s experience with a tool last week may not accurately predict the tool’s reliability today.

ABA Formal Opinion 512 recognizes this problem elsewhere. It expressly describes generative AI tools as “a rapidly moving target” and notes that their “precise features and utility to law practice are quickly changing and will continue to change in ways that may be difficult or impossible to anticipate.” Yet the opinion still suggests that a lawyer’s prior experience with a tool may support less independent verification or review.

That guidance is internally inconsistent. If the technology is rapidly changing, then a lawyer’s prior experience with it cannot serve as a stable baseline for reduced review. The opinion’s own cited authority undercuts the suggestion that legal-specific tools justify reduced verification: it cites a Stanford study finding that leading legal research companies’ generative AI systems “hallucinate between 17% and 33% of the time.”

The better formulation is this: even legal-specific generative AI tools do not justify a lower level of verification for most legal tasks, especially legal research. Citations, quotations, holdings, and case analysis must be independently verified. Prior experience with a tool may inform how a lawyer uses it, but it cannot serve as a basis for reducing verification.

ABA Formal Opinion 512 also provides incomplete guidance in its contract-review example. The opinion states that a lawyer using generative AI to “review and summarize numerous, lengthy contracts” may not need to manually review the entire set if the lawyer first tests the tool on a smaller subset and finds the summaries accurate.

The problem is that the example does not say enough about the conditions that should govern reliance. It does not address the risk level of the matter, the importance of the contracts being reviewed, the consequences of a missed provision, the representativeness of the sample, or the quality-control measures needed for high-risk documents. Nor does it account for the fact that generative AI systems change. A model, retrieval layer, document parser, prompt structure, or vendor setting may change between the initial subset test and the actual review.

The better guidance would be more precise: subset testing may support limited triage or first-pass review, but any reliance on AI-generated contract summaries should be recent, matter-specific, risk-calibrated, and tied to the actual tool, workflow, and document set being used.

The ABA and the Mississippi Bar were right to address generative AI. Lawyers need guidance. But that guidance must be precise. It should not frame “trust” as the starting point. It should not suggest that legal-specific AI tools permit reduced verification. And it should not treat prior user experience as a stable basis for reliance in a technological environment that the ABA itself describes as rapidly changing.

The better professional rule is straightforward: lawyers may use AI, but they must verify its outputs according to risk. When an output may affect legal advice, court filings, factual representations, or client rights, the lawyer should treat it as unverified until it has been confirmed through independent professional judgment.


