
Ask an Expert

Seth Price on AI: It Is the Real Deal, Not a Fire Drill

September 12, 2023

By Jeremy Conrad

Seth Price, founding partner at Price Benowitz LLC, has a long track record of leveraging emerging technology for law firm growth. Price was among the founders of USLaw, an early consumer legal website, and, more recently, BluShark Digital, a company providing digital marketing training and services to the legal industry.

Price frequently speaks and writes about leveraging new opportunities created by emerging technologies, while guarding against the ethical and business risks posed by their usage. The D.C. Bar recently spoke with Price about the increased use of artificial intelligence (AI) in the legal field, and ways to effectively identify and address technology-related challenges and opportunities in the workforce.

How concerned should employers be about the use of AI by interns, summer associates, and new attorneys?

I think it should be on their radar. It’s on the list of headaches that I deal with on a day-to-day basis running two businesses: [the] law firm and law firm support. I think you need to be aware of it, but I don’t think it’s a “sky is falling” situation. Personally, I wish that people were adopting it more. For the layperson who has no knowledge of AI, you don’t know what you don’t know, and its use could cause a lot of damage. But if you are aware of the basics and you have basic protocols in place, you should be fine.

I’ll give an analogy. In the marketing world, for lawyers, content needs to go on websites. It has to be high quality, and it has to be authoritative. One of the issues in the past was that people could … copy an article off a website and give it to their employer or contractor, and it would look great. Well, Copyscape came out, and we were able to tell very quickly whether the content existed on the web already.

It was [about] checks and balances. Over the years, both at the law firm and the marketing company, we’ve had people submit articles that were lifted from somewhere else on the web. There are times when someone takes an idea and rewrites and edits it, and that’s fine, but you can tell from Copyscape exactly how much is theirs.

Now, with AI, we have new tools that aren’t necessarily copying verbatim, but it’s still a question of whether content is plagiarized or accurate. Just like you wouldn’t take a college intern’s or 1L’s work for granted and just submit it to a court, you have to be conscious of all work product now and be aware that someone could be cutting a corner.

I have no doubt that now some people will be handed assignments that will be done in AI. Some schools are permitting AI as long as it is sourced. That may be a new trend, where disclosure permits responsible use, but if someone is pretending it is their own work, that’s a problem.

Younger employees are going to bring new technology into the field. They’ll drive adoption. The people in law school right now doing legal research with AI will know the strengths and weaknesses, but that doesn’t mean that, as a business owner, you can bury your head in the sand. You need to know this is out there and can be an issue, but — big picture — there is more good than harm to be done.

How can a firm tell if work was done with AI, and is it a problem?

There are a lot of reasons why you would care about AI content. If content is good, content is good, from a marketing perspective. Right now, AI might sound good when you read it, and it may be good to respond to an email, but it’s not thoughtful content that would replace a lawyer writing an article for a website. It can be filler, but Google can tell the difference, and [it has] a business imperative to do so. [Google doesn’t] own ChatGPT, so they haven’t said that they will definitively prohibit AI-produced content, but they still say that they want authoritative content that is high quality.

So, from a marketing perspective, use AI at your peril. A company in the business of [search engine optimization] isn’t going to reward something that undermines that anytime soon. I’ve seen examples of people penalized for flooding a website with AI content. That’s not well known, but it makes sense. Whether Google is detecting AI use or simply judging it as low-quality content is uncertain, but the outcome is the same.

There are [several] programs that we use to detect AI-produced content that aren’t 100 percent accurate, but they’re pretty good. You’ve got to be careful because there is always a story … I saw a headline about how the Declaration of Independence came up as AI. It has to be taken with a grain of salt, but if I saw someone was constantly handing in material that our AI detectors were flagging, that [would] require follow-up.


I had a meeting just the other day with a lawyer who was saying, “Why can’t I just have a person in my office [producing] content for my website for pennies on the dollar?” So, we ran our AI detector, and it was flagging all kinds of content. I thought maybe it was nothing, since the program isn’t entirely accurate, but then we took a meeting with them, and they told us, yes, they were using AI.

But there are other uses that are less problematic. [If] someone hands you something and you read it and it sounds good, does it really matter that it comes from AI? This is the case for many kinds of internal documents like employee manuals. Do you really care if you plagiarize an employee manual? I don’t. As a business owner, I want the best possible employee manual. As a starting point, I think it could be incredibly valuable.

Even in this case, though, some thought has to be put into it. A document created with a simple search and a couple of hours of work … the quality is not going to be there, and that’s going to have consequences.

An intern or junior associate who uses AI in their process and then discloses its use up the chain of command is, in many cases, going to be fine. It’s when use is undisclosed, or generated content goes unedited, that problems arise. Employers today can’t bury their heads in the sand and not acknowledge that it’s out there. Someone will take advantage, and the outcomes [will be] bad.

What are some good starting policies and areas of adoption?

If you are using AI, it should be disclosed. You shouldn’t be turning something in to a supervisor without disclosure of the use, and maybe even, in a perfect world, a signed statement by you [acknowledging the need for disclosure and] review by a supervisor [for public-facing AI content].

Areas where AI should be adopted include internal documentation and crowdsourcing ideas. One use that I’m particularly excited about involves a large document, such as a deposition, where AI could be a very powerful tool to source answers about its contents. These tools go beyond the standard find and replace function. How they can help find and extract information from a large document is incredibly exciting.

AI use in the drafting of emails is awesome. That said, it is drafting, not sending. Often, at a law firm, you may get an email that includes four or five questions. What I’ve seen, at this point, is that AI is great at drafting a response that lays out those questions with proposed answers. The content will need to be carefully reviewed, but the AI has provided a clear format and uses a formality that helps the user produce a good response.

That’s provided you are using AI as a drafting tool. What concerns me, as an employer, is that situation in which it is used to cut corners. Right now, it isn’t a shortcut. AI will produce a great draft that can provide formality, structure, and pleasantries, but the facts themselves are subjective, and a human really needs to go through them.

How are you using AI in your law firm and business?

One of the things that we love to do is take interviews with attorneys and turn them into content. [That] process … can be very laborious. In the past, transcription would be done by overseas labor, or by rudimentary AI, but the transcription process was imprecise, and the results were so rough that there was still a lot of work turning a conversation into publishable content.

What we are working really hard and really passionately on now is trying to get to the point where spoken words can be taken and transformed directly into written content. Not just a transcription, but actual high-quality content. We’re really bullish on that. What we’re not using it for is to replace writers outright. The information search and retrieval aspect of AI is also really exciting.

We’re looking at the importance of databases. Most sophisticated use of AI moves beyond the very basic ChatGPT usage to third-party software that allows you to put more data in. If you want to leverage AI, you need a database that is meaningful. I think that as people realize that the technology shouldn’t [draw upon] a nameless, faceless, amorphous database, but rather data that you put in … the results are going to be a heck of a lot better.

I’ll give you an example. We have an employee manual at our law firm. We hand them out when someone is hired and, when they are leaving, we ask them to update the manual during their last week. It is the worst possible point to do that. AI has pushed us to make employee manuals a living document.

One of our initiatives allows employees, through Slack, to search the manual and get answers. Theoretically, if they went to the document and searched it themselves, they could have found [the answer], but they won’t. Now they can effectively “google” their own documents and get a response back. We’re seeing interfaces between Slack and ChatGPT, generally. You can message your supervisor for an answer, and now you can message your manual for an answer, too.

AI isn’t something that will take a terrible business and turn it into a good one. It’s a tool that will take a good business and help it get to the next level. I’m torn because there are all these shiny objects out there. A year ago, we were asking about the metaverse and whether people would be putting on headsets and popping into law firms … that didn’t seem to quite go.

You’ve got to be careful about where you put your assets and your time, but AI, this is the real deal. It isn’t a fire drill. We are now in a position to work more efficiently, to be more profitable, and do something you couldn’t do before. You’ve got to be careful you don’t do something nefarious, and where it goes can be really scary, but in the short to medium term there are a lot of exciting and dynamic things that are going to happen.
