
Professional Growth

BNH.AI Experts Seek to Demystify AI Vulnerabilities & Risks in Upcoming CLE

December 19, 2022

By John Murph

Brenda Leong, a partner at BNH.AI, works at the only law firm in the world founded as a partnership between lawyers and data scientists. “The District of Columbia is the only [U.S.] jurisdiction that allows co-equal ownership of a law firm by nonlawyers,” Leong says. “The firm’s co-founders took advantage of that opportunity to address what we felt was a real gap in the market.”

On Tuesday, December 20, Leong and Patrick Hall, principal scientist at BNH.AI, will co-lead the D.C. Bar CLE course “Evaluating the Liabilities of Artificial Intelligence 2022,” providing attendees with a legal and technical outline to help them understand and evaluate the risks of cutting-edge technologies like AI and machine learning.

“One of the biggest challenges [for attorneys] is communicating among the computer science developer, the coder tech side of companies, and the legal policy governance side, then hav[ing] each understand the other's needs, requirements, and perspectives, as well as bring into some kind of alignment the values, desired outcomes, and benefits of the systems that were being developed,” Leong explains.

Here, Leong discusses with the D.C. Bar what attorneys can expect from the course.

Who is the primary audience for this CLE?

This particular presentation is mostly designed to speak to a nontechnical audience. We assume that we're mostly talking to lawyers or related policy people in a company. It does provide a technical review. That's part of our goal: to help people understand the vocabulary, some of the science, the actual operationalization of machine learning, and what that means so that they can then understand the legal liabilities that might arise in various areas.

How should attorneys think about security and privacy risks in AI?

Privacy and security pertaining to computers and digital systems are maturing areas. We've had security issues since we've had computers. Privacy has been a developing field for at least 20 to 25 years, since the late 1990s. As the internet grew, the concepts of digital privacy, personal information, and protecting data from unfair uses, or uses not known to the consumer, have been growing.

Since the early 2010s in particular, privacy policies and privacy impact assessments have become more commonplace. The question is what changes in those two fields when artificial intelligence is involved. We try to address some of the new or slightly different risks that a system might have, either in security or in privacy, that are different from what people are used to dealing with in their computerized products, services, and platforms.

One of the founding cornerstones of privacy is data minimization: not collecting or holding data beyond what you absolutely need to deliver the product or service, and only keeping it as long as you need to complete that service or maintain that relationship with the consumer. In AI, machine learning requires lots of data, in some cases beyond our ability to really comprehend. So, how do we reconcile that with privacy concerns for the individual people who are the original source of that data?

In security, there are some particular vulnerabilities in a machine-learning system that are different from those in a traditional network. For example, you don't have to hack into a system to mess up a machine-learning-based program. If you know enough about it or how it works, you could do what's called “poisoning,” which is to inject false input data in some form. There are other things like “model extraction,” where you reverse engineer how the model works and then figure out ways to subvert it. You can also figure out whose data might be in a data set from the outside in. There are just some aspects that are newer and unique to AI that we try to explain to folks who might already have a good grounding in those areas.
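To make the poisoning idea concrete, the following is a minimal illustrative sketch (not from the CLE materials) showing how an attacker who can corrupt training labels, without any network intrusion, can degrade a model. The synthetic dataset, the 30 percent flip rate, and the use of scikit-learn are assumptions for illustration only.

```python
# Sketch of label-flipping "data poisoning" (illustrative assumptions throughout).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model: trained on the data as collected.
clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

# Poisoned model: an attacker flips the labels of 30% of the training rows.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned_acc = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned).score(X_test, y_test)

print(f"clean accuracy:    {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```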

As far as algorithmic bias [is concerned], that is a whole area in and of itself. Machine learning poses some new risks because human systems are pretty biased. We know that our historical data and systems are fairly biased. It's why we have civil rights laws and anti-discrimination laws. So, the first order of business is just trying to explain that those laws all still apply. You have to show that your model or system is in compliance with … 50 years of legal precedent involving protections in employment, finance, or housing. A lot of people don't really know how to do that, or haven't really thought about the fact that they still have to be able to show that. Then there's the question of what other ways bias potentially is being introduced as a result of these systems. 
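One common first-pass check in the employment context Leong mentions is the adverse impact ratio associated with the EEOC's "four-fifths rule." The sketch below is a hypothetical illustration; the group names and selection counts are made up and are not part of the CLE.

```python
# Sketch of an adverse impact ratio check (four-fifths rule); illustrative data only.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

rates = {
    "group_a": selection_rate(selected=48, applicants=100),
    "group_b": selection_rate(selected=30, applicants=100),
}

reference = max(rates.values())      # highest group selection rate
for group, rate in rates.items():
    ratio = rate / reference         # adverse impact ratio vs. the reference group
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```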

What are some of the major takeaways you want attendees to get from this CLE?

We want the lawyer attendees to know that AI operates in the existing legal world. We have to consider all of the current laws. There are a lot of things that do apply already, even though there's no federal privacy law, federal data protection law, or federal AI law. There are state privacy laws, but they don't all call out AI in particular.

Despite all that, there are a lot of things that do apply. The Federal Trade Commission has a lot of things that apply; individual agencies have a lot of things that apply. There are a lot of current legal frameworks that need to be considered for AI systems.

The last thing is just to try to demystify AI technology. It may feel like an elephant, but you really can start [taking] one bite at a time and work your way through. This is a manageable thing. Even if you're not a computer scientist, a programmer, or someone who's really familiar with that kind of technology, there is a level of understanding you can reach, and you can still take concrete steps to deal with the challenges of AI technology.

“Evaluating the Liabilities of Artificial Intelligence 2022” takes place from 12 to 1 p.m. Click here to register.
