New Research on AI Impact on Critical Thinking for Lawyers


Since the public release of ChatGPT in 2022, a growing body of empirical research has suggested that generative AI systems could fundamentally reshape the practice of law. Yet many actors across the legal ecosystem remain hesitant to embrace these tools. One of the most common concerns is that reliance on AI may erode human legal reasoning and professional judgment. Until recently, however, that concern rested largely on intuition rather than evidence.

In a recent draft article, we provide the first empirical evidence directly addressing how lawyers’ use of AI affects independent legal reasoning. We conducted a randomized controlled trial involving approximately 100 upper-level students at the University of Minnesota Law School. Participants completed a sequence of common lawyering tasks: synthesizing legal source materials, applying those materials to a client’s problem, and revising their initial memo. We randomly assigned participants to either a control group or an AI-exposed group. The control group could not use AI until the final revision task. The AI-exposed group used AI during the synthesis and revision stages, but not during the intervening application task, which allowed us to observe whether earlier AI use influenced later reasoning when AI was unavailable.

As expected, participants who used AI to help synthesize the legal materials substantially outperformed those who did not. This result is consistent with prior research, but the magnitude of the effect was larger than previously observed, likely because participants used a more advanced AI system. Access to AI increased performance on the synthesis task by roughly 50 to 70 percent, exceeding the gains documented in earlier studies that used prior-generation models. Participants in the AI-exposed group also completed the synthesis task significantly faster.

More strikingly, we found that the benefits of AI extended beyond the tasks where it was directly available. Although we initially hypothesized that participants who relied on AI during synthesis might perform worse once AI was removed, we observed the opposite. On the application task, where neither group could use AI, participants who had earlier access to AI performed better than those who never used it. These results suggest that AI’s ability to improve legal reasoning operates indirectly, by improving the quality of initial work on which subsequent work builds.

These results provide important, though necessarily limited, evidence that AI use does not inevitably undermine independent legal reasoning. At the same time, the study has important limitations. We relied on law students rather than practicing attorneys, and the experimental tasks were more structured than many real-world assignments. It also remains possible that AI use could weaken understanding in different contexts, particularly when users rely on AI without carefully engaging with its output.

Additional findings from our experiment highlight that risk. During the revision stage, all participants were instructed to use AI to improve the memo they had drafted independently. We designed this phase to isolate the effects of introducing AI after participants had already completed their own analysis. The results were mixed. AI assistance helped participants with weaker initial memos improve their work. But participants who began with strong initial memos often produced worse revisions after using AI. This pattern suggests that AI can sometimes displace or dilute careful reasoning, even among relatively strong performers.

Our findings also do not address the long-term effects of sustained AI use. Emerging evidence from other professional domains indicates that heavy reliance on AI may impair durable learning, particularly for individuals who have not yet developed core competencies. That possibility underscores the importance of using AI thoughtfully rather than reflexively.

Taken together, our results point toward practical guidance for lawyers seeking to capture AI’s benefits while minimizing risks. Lawyers and law students should use AI primarily for tasks where they can independently assess, explain, and build on the output. They should confine AI use to narrow, well-defined components of a project rather than delegating entire assignments. Lastly, lawyers and law students should not rely on AI when working under severe time pressure or cognitive fatigue, conditions that increase the likelihood that AI substitutes for careful analysis rather than supporting it.


