Editorial Note: The following is a recounting of a panel discussion entitled “Why BOTher Writing” that took place on November 23, 2023, at an educational gathering at the Appellate Judges Education Institute Summit in Washington, D.C. The panelists included appellate lawyer Mark Davies, Washington and Lee University School of Law Professor Joshua Fairfield, Thomson Reuters Senior Vice President for Product Development Emily Colbert, and Judge Herbert B. Dixon Jr. (ret.), Superior Court, District of Columbia. The interchange largely focused on “ChatGPT,” an A.I. program that responds in full text to questions posed by users.
Judge Herbert Dixon opened the discussion by commenting that artificial intelligence (A.I.) has become an important part of our lives. He noted that President Biden recently issued new mandates regarding A.I. and safety, highlighting A.I.’s growing presence throughout society.
This sparked commentary from the audience, particularly from Judge Boggs, Senior Judge, U.S. Court of Appeals for the Sixth Circuit, who posed the following question to the ChatGPT bot: “Name three of Judge Boggs’s most famous cases.” After the bot responded with an answer in full text, Judge Boggs reviewed the answer, only to find that the bot’s response was not accurate. This prompted further remarks from Judge Dixon, who described his own unsuccessful interaction with ChatGPT: after he, too, posed a question to the chatbot, it “hallucinated” a response, providing incorrect information.
On the topic of the bot’s potential for “hallucination,” attorney Mark Davies commented that, according to his conversations with certain technology-focused individuals, “hallucinations” by A.I. are fixable and will not be a problem over time. Davies then posed a dilemma: some lawyers use “hallucination” by A.I. systems such as ChatGPT as an excuse not to use the technology at all, which leaves one to ponder whether the potential benefit is really worth the risk.
Thomson Reuters’ Emily Colbert then added her take on ChatGPT. She explained that users need to be aware of both the input and the output, i.e., what is being presented to the system and what the system is generating in response. She emphasized the importance of properly educating lawyers on how to use ChatGPT, noting that there are several techniques for using the technology effectively. For that reason, Ms. Colbert urged each user to do his or her due diligence before engaging ChatGPT’s advanced technology. And because ChatGPT was trained at a fixed moment in time, she alerted the audience to an inevitable reality: in the future, ChatGPT’s data will become “stale.”
According to Professor Fairfield, while the technology itself will improve over time, the hallucinations will not go away; in fact, they will become more persuasive. This is a frightening and risky proposition: in the future, ChatGPT’s “hallucinations” will become harder and harder to detect. The core problem with ChatGPT, Fairfield explained, is that it is not grounded in common sense, i.e., the system does not recognize context. As an example of this potential for technological failure, he pointed to Google Maps, noting that while many people were skeptical of it at first, it is now very widely used even though it is not always accurate and can lead users in the wrong direction. Picking up on that example, Mr. Davies noted society’s over-reliance on technology.
Mr. Davies then fielded the following question: Would you use a chatbot at oral argument if permitted to do so? Davies responded that while he thinks the bot would be helpful, his answer comes with qualifications. The bot could be used to run a search, but that use would depend on the type of question posed. For a simple inquiry, Davies noted, he might use the bot to run a search, accepting the risk that the answer could be wrong. But for a more challenging question, such as an inquiry going to the heart of a case, he would not use the chatbot.
The conversation then shifted to how to ensure that assistive technology such as ChatGPT is used properly by law students. Professor Fairfield responded first, emphasizing that the technology must be approached from an ethical perspective. According to Fairfield, if the technology is used to write, it should not be used as a substitute for one’s own skill and judgment. Fairfield further explained that users must consider, and remain on guard about, whether their use of A.I. is helping or hurting those the technology is meant to serve (e.g., clients). He also stressed the importance of making sure that A.I. does not take over the writing of the law, especially because A.I. bots are not part of human society and do not care about society’s welfare.
The next question posed to the panel was the following: How should courts and judges deal with practitioners’ use of ChatGPT, and must lawyers certify in their pleadings that they are using A.I.? Mr. Davies responded first, opining that practitioners should not hide from the judge the technology they are using. Davies noted that this is a transition period and that, over time, the technology may simply become part of attorneys’ practices. Ms. Colbert then expressed her opinion that the shift from books to technology is a good one. She explained that younger lawyers will start to use A.I. in law school. According to Ms. Colbert, A.I. is a part of life, and lawyers should not turn their backs on it.
The next comments came from Professor Fairfield, who raised the possibility of an “under-disclosure problem.” Specifically, Fairfield explained that the use of A.I. is often undetectable, creating the potential for attorneys to refrain from disclosing their use of the technology. For this reason, Fairfield expressed his opinion that judges need to be clear in their orders about exactly what they require lawyers to disclose. If judges are not clear, the result could be both under-disclosure problems (e.g., attorneys not disclosing to the court that they used A.I. to draft their briefs) and over-disclosure problems (e.g., attorneys disclosing too much, such as notifying the court that they “used Google to run a search”).
Ms. Colbert then jumped in again, assuring the audience that Westlaw is working to ensure that statements generated by A.I. can be validated via footnotes that allow users to verify what the bot has said. She explained that although users still need to verify A.I.-generated answers, the technology saves time even with that extra step. She then returned to the importance of user due diligence: anyone using A.I. models must take reasonable steps to understand what he or she is using and how the technology works before putting it to practical use. Mr. Davies then expressed the overall opinion that while the technology may make errors, it can still be a helpful tool.
Several additional considerations were then explored by the panel. First, Professor Fairfield raised the subject of string cites, noting that A.I. will begin generating string cites of cases it believes are connected. According to Fairfield, this is concerning because that “connection” will not be determined by humans and human thought. Even more concerning, in his view, is the prospect that in five years all connections between cases will be determined by A.I.
Ms. Colbert then commented on the evolving nature of A.I. She explained that the smaller the A.I. language model, the less accurate it is likely to be. In her view, this makes the idea of building one’s own A.I. model risky, given the limited size of such an individual model.
Professor Fairfield then expressed his opinion that, from a judge’s perspective, it is uncomfortable to think that attorneys may use A.I. to write briefs and to generate language that a particular judge likes and uses. In that context, according to Fairfield, the A.I. would be “massaging” the brief to please the judge.
But in Fairfield’s opinion, the network of meaning needs to remain human. Looking to the future, he explained that years from now, the next generation will be reading connections between cases made by A.I. rather than connections made by the human mind. This, according to Fairfield, is problematic.
The panel then shifted to closing comments. The first came from Mr. Davies, who expressed genuine concern about (1) how malicious actors may use A.I. technology and (2) whether A.I. may eventually outsmart its human users. Davies thus emphasized the importance of A.I. safety and of keeping the technology “under control.”
Ms. Colbert, in giving her final thoughts, expressed the opinion that it is important for more seasoned lawyers to help develop the regulations for A.I. and the rules for how it should be used. Professor Fairfield concluded the session by noting his concern about input and output: eventually, the output now generated by A.I. will become the input fed back into A.I., creating a “vicious cycle.” According to Fairfield, this poses a “scary” future, one in which the “persuasive” statements A.I. generates will become part of the law and of the next generation’s understanding of that law.