Tech Law Talks
Listen to Tech Law Talks for practical observations on technology and data legal trends, from product and technology development to operational and compliance issues that practitioners encounter every day. On this channel, we host regular discussions about the legal and business issues around data protection, privacy and security; data risk management; intellectual property; social media; and other types of information technology.
93 episodes
All episodes

AI explained: Introduction to Reed Smith's AI Glossary (14:56)
Have you ever found yourself in a perplexing situation because of a lack of common understanding of key AI concepts? You're not alone. In this episode of "AI explained," we delve into Reed Smith's new Glossary of AI Terms with Reed Smith guests Richard Robbins , director of applied artificial intelligence, and Marcin Krieger , records and e-discovery lawyer. This glossary aims to demystify AI jargon, helping professionals build their intuition and ask informed questions. Whether you're a seasoned attorney or new to the field, this episode explains how a well-crafted glossary can serve as a quick reference to understand complex AI terms. The E-Discovery App is a free download available through the Apple App Store and Google Play . ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Marcin: Welcome to Tech Law Talks and our series on AI. Today, we are introducing the Reed Smith AI Glossary. My name is Marcin Krieger, and I'm an attorney in the Reed Smith Pittsburgh office. Richard: And I am Richard Robbins. I am Reed Smith's Director of Applied AI based in the Chicago office. My role is to help us as a firm make effective and responsible use of AI at scale internally. Marcin: So what is the AI Glossary? The Glossary is really meant to break down big ideas and terms behind AI into really easy-to-understand definitions so that legal professionals and attorneys can have informed conversations and really conduct their work efficiently without getting buried in tech jargon. Now, Rich, why do you think an AI glossary is important? Richard: So, I mean, there are lots of glossaries about, you know, sort of AI and things floating around. I think what's important about this one is it's written by and for lawyers. And I think that too many people are afraid to ask questions for fear that they may be exposed as not understanding things they think everyone else in the room understands. Too often, many are just afraid to ask. So we hope that the glossary can provide comfort to the lawyers who use it. And, you know, I think to give them a firm footing. I also think that it's, you know, really important that people do have a fundamental understanding of some key concepts, because if you don't, that will lead to flawed decisions, flawed policy, or choices can just miscommunicate with people in connection with you, with your work. So if we can have a firm grounding, establish some intuition, I think that we'll be in a better spot. Marcin, how would you see that? Marcin: First of all, absolutely, I totally agree with you. I think that it goes even beyond that and really gets to the core of the model rules. When you look at the various ethics opinions that have come out in the last year about the use of AI, and you look at our ethical obligations and basic competence under Rule 1.1, we see that ethics opinions that were published by the ABA and by various state ethics boards say that there's a duty on lawyers to exercise the legal knowledge, skill, thoroughness, and preparation necessary for the representation. And when it comes to AI, you have to achieve that competence through some level of self-study. 
This isn't about becoming experts on AI, but to be able to competently represent a client in the use of generative AI, you have to have an understanding of the capabilities and the limitations, and a reasonable understanding about the tools and how the tech works. To put it another way, you don't have to become an expert, but you have to at least be able to be in the room and have that conversation. So, for example, in my practice, in litigation and specifically in electronic discovery, we've been using artificial intelligence and advanced machine learning and various AI products prior to generative AI for well over a decade. And as we move towards generative AI, this technology works differently and it acts differently. And how the technology works is going to dictate how we do things like negotiate ESI protocols, how we issue protective orders, and also how we might craft protective orders and confidentiality agreements. So being able to identify how these types of orders restrict or permit the use of generative AI technology is really important. And you don't want to get yourself into a situation where you may inadvertently agree to allow the other side, the receiving party of your client's data, to do something that may not comply with the client's own expectations of confidentiality. Similarly, when you are receiving data from a producing party, you want to make sure that the way that you apply technology to that data complies with whatever restrictions may have been put into any kind of protective order or confidentiality agreement. Richard: Let me jump in and ask you something about that. So you've been down this path before, right? This is not the first time professionally you've seen new technology coming into play that people have to wrestle with. And as you were going through the prior use of machine learning and things that inform your work, how have you landed? You know, how often did you get into a confusing situation because people just didn't have a common understanding of key concepts where maybe a glossary like this would have helped, or did you use things like that before? Marcin: Absolutely. And it comes, it's cyclic. It comes in waves. Anytime there's been a major advancement in technology, there is that learning curve where attorneys have to not just learn the terminology, but also trust and understand how the technology works. Even now, technology that was new 10 years ago still continues to need to be described and defined, even outside of the context of AI. Things like just removing email threads: almost every ESI order that we work with requires us to explain and define what that process looks like. When we talk about traditional technology-assisted review, to this day our agreements have to explain and describe, to a certain level, how technology-assisted review works. But 10 years ago, it required significant investment of time negotiating, explaining, educating, not just opposing counsel, but our clients. Richard: I was going to ask about that, right? Because it would seem to me that, you know, especially at the front end, as this technology evolves, it's really easy for us to talk past each other or to use words and not have a common understanding, right? Marcin: Exactly, exactly. And now with generative AI, we have exponentially more terminology. There's so many layers to the way that this technology works that even a fairly skilled attorney like myself, when I first started learning about generative AI technology, I was completely overwhelmed.
And most attorneys don't have the time or the technical understanding to go out into the internet and find that information. A glossary like this is probably one of the best ways that an attorney can introduce themselves to the terminology, or have a reference where, if they see a term that they are unfamiliar with, they can quickly go take a look: what does that term mean? What's the implication here? Get that two-sentence description so that they can say, okay, I get what's going on here, or put the brakes on and say, hey, I need to bring in one of my tech experts at this point. Richard: Yeah, I think that's really important. And this kind of goes back to this notion that this glossary was prepared, you know, at least initially, from the litigator's lens, the litigator's perspective. But it's really useful well beyond that. And, you know, I mean, I think the biggest need is to take the mystery out of the jargon, to help people, you know, build their intuition, to ask good questions. And you touched on something where you said, well, I don't need to be a technical expert on a given topic, but I need a tight, accessible description that lets me get the essence of it. So, I mean, a couple of my, you know, favorite examples from the glossary are, you know, in the last year or so, we've heard a lot of people talking about RAG systems and they fling that phrase around, you know, retrieval augmented generation. And, you know, you could sit there and say to someone, yeah, use that label, but what is it? Well, we describe that in three tight sentences. Agentic AI, two sentences. Marcin: And that's a real hot topic for 2025, agentic AI. Richard: Yep. Marcin: And nobody knows what it is. So I focus a lot on litigation and in particular electronic discovery. So I have a very tight lens on how we use technology and where we use it. But in your role, you deal with attorneys in every practice group and also professionally outside of the law firm. You deal with professionals and technologists. In your experience, how do you see something like this AI glossary helping the people that you work with, and what kind of experience levels do you get exposed to? Richard: Yeah, absolutely. So I keep coming back to this phrase, this notion of saying it's about helping people develop an intuition for when and how to use things appropriately, what to be concerned about. So a glossary can help to demystify these concepts so that you can then carry on with whatever it is that you're doing. And so I know that's rather vague and abstract, but I mean, at the end of the day, if you can get something down to a couple of quick sentences and the key essence of it, and that light bulb comes on and people go, ah, now I kind of understand what we're talking about, that will help them guide their conversations about what they should be concerned about or not concerned about. And so, you know, that glossary gives you a starting point. It can help you to ask good questions. It can set alarm bells off when people are saying things that are, you know, perhaps very far off those key notions. And you have, you know, the ability to, I think, know when you're out of your depth a little bit, but to know enough to at least start to chart that course. Because right now people are just waving their hands. And that, I think, results in a tendency to say, oh, I can't rely on my own intuition, my own thinking. I have to run away and hide.
And I think the glossary makes all this information more accessible so that you can start to interact with the technology and the issues and things around it. Marcin: Yeah, I agree. And I also think that having those two- to three-sentence hits on what these terms are will also help attorneys know how to ask the right questions. Like you said, know when to get that help, but also know how to ask for it. Because I think that most attorneys know when they need to get help, but they struggle with how to articulate that request for it. Richard: Yeah, I think that's right. And I think that, you know, often we can bring things back to concepts that people are already comfortable with. So I'll spend a lot of time talking to people about generative AI, and their concerns really have nothing to do with the fact that it's generative AI. It just happens to be something that's hosted in the cloud. And we've had conversations about how to deal with information that's hosted in the cloud or not, and we're comfortable having those. But yet, when we get to generative AI, they go, oh, wait, it's a whole new range of issues. I'm like, no, actually, it's not. You've thought about these things before. We can attack these things again. Now, again, the point of the glossary is not to teach all this stuff, but it's about helping you get your bearings straight, to get you oriented. And from there, you can have the journey. Marcin: So in order to get onto that journey, we have to let everybody know where they can actually get a copy of the glossary. So the Reed Smith AI Glossary can be found at the website e-discoveryapp.com, or any attorney can go to the Play Store or the Apple Store and download the E-Discovery App, which is a free app that contains a variety of resources. And right on the landing page of the app, there's a link for glossaries, and within there you'll see a downloadable link that'll give you a PDF version of the AI Glossary, which, again, any attorney can get for free and have available. And of course, it is a live document, which means that we will make updates and revisions to it as the technology evolves and as how we present information changes in the coming years. Richard: At that price, I'll take six. Marcin: Thank you, Rich. Thanks for your time. Richard: Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.
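For listeners who want a concrete picture of the "retrieval augmented generation" (RAG) concept mentioned in this episode, here is a minimal, illustrative Python sketch. It is not taken from the Reed Smith glossary or any product: the tiny glossary snippets, the keyword-overlap scoring, and the build_prompt helper are invented for illustration only, and a real RAG system would use embeddings, a vector store, and an actual language model call rather than simple word matching.

```python
# Toy retrieval augmented generation (RAG) sketch: retrieve the most relevant
# snippets for a question, then assemble them into a prompt for a language model.
# Illustrative only; the documents and scoring below are made up for this example.

DOCUMENTS = [
    "Retrieval augmented generation (RAG) pairs a language model with a search step "
    "so answers can be grounded in retrieved source text.",
    "Agentic AI refers to systems that plan and take multi-step actions toward a goal "
    "with limited human direction.",
    "Technology-assisted review (TAR) uses machine learning to prioritize documents "
    "for human review in e-discovery.",
]

def score(question: str, doc: str) -> int:
    """Count how many words from the question appear in the document (toy relevance score)."""
    q_words = set(question.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    ranked = sorted(DOCUMENTS, key=lambda d: score(question, d), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Combine retrieved context with the question; a real system would send this to an LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    print(build_prompt("What is retrieval augmented generation?"))
```

The point of the sketch is simply that the model's answer is constrained by whatever the retrieval step hands it, which is why RAG is often described as a way to ground generative output in known sources.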

AI explained: Navigating AI in Arbitration - The SVAMC Guideline Effect (37:11)
Arbitrators and counsel can use artificial intelligence to improve service quality and lessen work burden, but they also must deal with the ethical and professional implications. In this episode, Rebeca Mosquera, a Reed Smith associate and president of ArbitralWomen, interviews Benjamin Malek, a partner at T.H.E. Chambers and former chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. They reveal insights and experiences on the current and future applications of AI in arbitration, the potential risks of bias and transparency, and the best practices and guidelines for the responsible integration of AI into dispute resolution. The duo discusses how AI is reshaping arbitration and what it means for arbitrators, counsel and parties. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Rebeca: Welcome to Tech Law Talks and our series on AI. My name is Rebeca Mosquera. I am an attorney with Reed Smith in New York focusing on international arbitration. Today we focus on AI in arbitration: how artificial intelligence is reshaping dispute resolution and the legal profession. Joining me is Benjamin Malek, a partner at T.H.E. Chambers and chair of the Silicon Valley Arbitration and Mediation Center AI Task Force. Ben has extensive experience in commercial and investor-state arbitration and is at the forefront of AI governance in arbitration. He has worked at leading institutions and law firms, advising on the responsible integration of AI into dispute resolution. He's also founder and CEO of LexArb, an AI-driven case management software. Ben, welcome to Tech Law Talks. Benjamin: Thank you, Rebeca, for having me. Rebeca: Well, let's dive into our questions today. So artificial intelligence is often misunderstood, or, to put it in other words, there are a lot of misconceptions surrounding AI. How would you define AI in arbitration? And why is it important to look beyond just generative AI? Benjamin: Yes, thank you so much for having me. AI in arbitration has existed for many years now, but it hasn't been until the rise of generative AI that big question marks have started to arise. And that is mainly because generative AI creates or generates AI output, whereas up until now, it was a relatively mild output. I'll give you one example. Looking for an email in your inbox, that requires a certain amount of AI. Your spellcheck in Word has AI, and it has been used for many years without raising any eyebrows. It hasn't been until ChatGPT really gave an AI tool to the masses that questions started arising. What can it do? Will attorneys still be held accountable? Will AI start drafting for them? What will happen? And it's that fear that started generating all this talk about AI. Now, to your question on looking beyond generative AI, I think that is a very important point. In my function as the chair of the SVAMC AI Task Force, while we were drafting the guidelines on the use of AI, one of the proposals was to call it use of generative AI in arbitration. And I'm very happy that we stood firm and said no, because there are many forms of AI that will arise over the years.
Now we're talking about predictive AI, but there are many AI forms such as predictive AI, NLP, automations, and more. And we use it not only in generating text per se, but we're using it in legal research, in case prediction to a certain extent. Whoever has used LexisNexis, they're using a new tool now where AI is leveraged to predict certain outcomes, document automation, procedural management, and more. So understanding AI as a whole is crucial for responsible adoption. Rebeca: That's interesting. So you're saying, obviously, that AI in arbitration is more than just ChatGPT, right? I think that the reason why people think that, and why, as we'll see in some of the questions I have for you, people may rely on ChatGPT, is because it sounds normal. It sounds like another person texting you, providing you with a lot of information. And sometimes we just, you know, people, I can understand or I can see why people might believe that that's the correct outcome. And you've given examples of how AI is already being used and that people might not realize it. So all of that is very interesting. Now, tell me, as chair of the SVAMC AI Task Force, you've led significant initiatives in AI governance, right? What motivated the creation of the SVAMC AI guidelines? And what are their key objectives? And before you dive into that, though, I want to take a moment to congratulate you and the rest of the task force on being nominated once again for the GAR Awards, which will be unveiled during Paris Arbitration Week in April of this year. That's an incredible achievement. And I really hope you'll take pride in the impact of your work and the well-deserved recognition it continues to receive. So good luck to you and the rest of the team. Benjamin: Thank you, Rebeca. Thank you so much. It really means a lot, and it also reinforces the importance of our work, seeing that we're nominated not only once last year for the GAR Award, but a second year in a row. I will be blunt, I haven't kept track of many nominations, but I think it may be one of the first years where one initiative gets nominated twice, one year after the other. So that in itself for us is worth priding ourselves on. And it may potentially even be more than an award itself. It really is a testament to the work we have provided. So what led to the creation of the SVAMC AI guidelines? It's a very straightforward and, to a certain extent, a little boring answer as of now, because we've heard it so many times. But the crux was Mata versus Avianca. I'm not going to dive into the case. I think most of us have heard it. Who hasn't? There's many sources to find out about it. The idea being that in a court case, an attorney used ChatGPT, used the outcome without verifying it, and it caused a lot of backlash, not only from the opposing party, but also being chastised by the judge. Now when I saw that case, and I saw the outcome, and I saw that there were several tangential cases throughout the U.S. and worldwide, I realized that it was only a question of time until something like this could potentially happen in arbitration. So I got on a call with my dear friend Gary Benton at the SVAMC, and I told him that I really think that this is the moment for the Silicon Valley Arbitration Mediation Center, an institution that is heavily invested in tech, to shine. So I took it upon myself to say, give me 12 months and I'll come up with guidelines.
So up until now at the SVAMC, there are a lot of think tank-like groups discussing many interesting subjects. But the SVAMC scope, especially AI related, was to have something that produces something tangible. So the guidelines to me were intuitive. It was, I will be honest, I don't think I was the only one. I might have just been the first mover, but there we were. We created the idea. It was vetted by the board. And we came up first with the task force, then with the guidelines. And there's a lot more to come. And I'll leave it there. Rebeca: Well, that's very interesting. And I just wanted to mention or just kind of draw from, you mentioned the Mata case. And you explained a bit about what happened in that case. And I think that was, what, 2023? Is that right? 2022, 2023, right? And so, but just recently we had another one, right? In the federal courts of Wyoming. And I think about two days ago, the order came out from the judge and the attorneys involved were fined about $15,000 because of hallucinations on the case law that they cited to the court. So, you know I see that happening anyway. And this is a major law firm that we're talking about here in the U.S. So it's interesting how we still don't learn, I guess. That would be my take on that. Benjamin: I mean, I will say this. Learning is a relative term because learning, you need to also fail. You need to make mistakes to learn. I guess the crux and the difference is that up until now, at any law firm or anyone working in law would never entrust a first-year associate, a summer associate, a paralegal to draft arguments or to draft certain parts of a pleading by themselves without supervision. However, now, given that AI sounds sophisticated, because it has unlimited access to words and dictionaries, people assume that it is right. And that is where the problem starts. So I am obviously, personally, I am no one to judge a case, no one to say what to do. And in my capacity of the chair of the SVAMC AI task force, we also take a backseat saying these are soft law guidelines. However, submitting documents with information that has not been verified has, in my opinion, very little to do with AI. It has something to do with ethical duty and candor. And that is something that, in my opinion, if a court wants to fine attorneys, they're more welcome to do so. But that is something that should definitely be referred to the Bar Association to take measures. But again, these are my two cents as a citizen. Rebeca: No, very good. Very good. So, you know, drawing from that point as well, and because of the cautionary tales we hear about surrounding these cases and many others that we've heard, many see AI as a double-edged sword, right? On the one hand, offering efficiency gains while raising concerns about bias and procedural fairness. What do you see as the biggest risk and benefits of AI in arbitration? Benjamin: So it's an interesting question. To a certain extent, we tried to address many of the risks in the AI guidelines. Whoever hasn't looked at the guidelines yet, I highly suggest you take a look at them they're available on svamc.org I'm sure that they're widely available on other databases Jus Mundi has it as well. I invite everyone to take a look at it. There are several challenges. We don't believe that those challenges would justify not using it. To name a few, we have bias. We have lack of transparency. 
We also have the issue of over-reliance, which is the one we were talking about just a minute ago, where it seems so sophisticated that we as human beings, having worked in the field, cannot conceive how such an eloquent answer is anything but true. So there's a black box problem and so many others, but quite frankly, there are so many benefits that come with it. AI is an unlimited knowledge tool that we can use. As of now, AI is what we know it is. It has hallucinations. It does have some bias. There is this black box problem. Where does it come from? Why? What's the source? But quite frankly, if we are able to triage the issues and to really look at what are the advantages and what is it we want to get out of it, and I'll give you a brief example. Let's say you're drafting an RFA. If you know the case, you know the parties, and you know every aspect of the case, AI can draft everything head to toe. You will always be able to tell what is from the case and what's not from the case. If we over-rely on AI and we allow it to draft without verifying all the facts, without making sure we know the transcript inside and out, without knowing the facts of the case, then we will always run into certain issues. Another issue we run into a lot with predictive AI is relying on data that exists. So compared to generative AI, predictive AI is taking data that already exists and predicting another outcome. So there's a lesser likelihood of hallucinations. The issue with that is, of course, bias. Just a brief example, you're the president of Arbitral Women, so you will definitely understand. It has only been in the last 30 years that women had more of a presence in arbitration, specifically sitting as an arbitrator. So if we rely on data that goes beyond those 30, 40, 50 years, there's going to be a lot of male decisions having been taken. Potentially even laws that applied back then that were not very gender neutral. So we need, we as people, need to triage and understand where is the good information, where is information that may have bias and counterbalance it. As of now, we will need to counterbalance it manually. However, as I always say, we've only seen a grain of salt of what AI can do. So as time progresses, the challenges, as you mentioned, will become lesser and lesser and lesser. And the knowledge that AI has will become wider and wider. As of now, especially in arbitration, we are really taking advantage of the fact that there is still scarcity of knowledge. But it is really just a question of time until AI picks up. So we need to get a better understanding of what is it we can do to leverage AI to make ourselves indispensable. Rebeca: No, that's very interesting, Ben. And as you mentioned, yes, as president of ArbitralWomen, the word bias is something I pay close attention. You know, we're talking about bias. You mentioned bias. And we all have conscious or unconscious biases, right? And so you mentioned that about laws that were passed in the past where potentially there was not a lot of input from women or other members of our society. Do you think AI can be trained then to be truly neutral or will bias always be a challenge? Benjamin: I wish I had the right answer. I think, I actually truly believe that bias is a very relative term. And in certain societies, bias has a very firm and black and white standing, whereas in other societies, it does not. 
Especially in international arbitration, where we not only deal with cross-border disputes, but different cultures, different laws, laws of the seats, laws of the contract, I think it's very hard to point out one set of bias that we will combat or that we will set as a principle for everything. I think ultimately what ensures that there is always human oversight in the use of AI, especially in arbitration, are exactly these types of issues. So we can, of course, try to combat bias and gender bias and others. But I don't think it is as easy as we say, because even nowadays, in normal proceedings, we are still dealing with bias on a human level. So I think we cannot ask machines to be less biased than we as humans are. Rebeca: Let me pivot here a bit. And, you know, earlier, we mentioned the GAR Awards. And now I'd like to shift our focus to the recent GAR Live on technology that took place here in New York last week on February 20th. And to give our audience, you know, some context, GAR stands for Global Arbitration Review, a widely read journal that not only ranks international arbitration practices at law firms worldwide, but also, among other things, organizes live conferences on cutting-edge topics in arbitration across the globe. So I know you were a speaker at GAR Live, and there was an important discussion about distinguishing generative AI, predictive AI, and other AI applications. How do these different AI technologies impact arbitration, and how do the SVAMC guidelines address them? Benjamin: I was truly honored to speak at the GAR Live event in New York, and I think the fact that I was invited to speak on AI is a testament to how important AI is and how widely interested the community is in the use of AI, which is very different to 2023 when we were drafting the guidelines on the use of AI. I think it is important to understand that ultimately, everything in arbitration, specifically in arbitration, needs human oversight. But in using AI in arbitration, I think we need to differentiate on how the use of AI is different in arbitration versus other parts of the law, and specifically how it is different in arbitration compared to how we would use it on a day-to-day basis. In arbitration specifically, arbitrators are given a personal mandate that is very different to how law works in general, where you have a lot of judges that let their assistants draft parts of the decision, parts of the order. Arbitration is a little different, and that is for a reason. Specifically in international arbitration, because there are certain sensitivities when it comes to local law, when it comes to an international standard and local standards, arbitrators are held to a higher standard. Using AI as an arbitrator, for example, which could technically be put at the same level as using a tribunal secretary, has its limits. So I think that AI can be used in many aspects, from drafting for attorneys, for counsel, when it comes to helping prepare graphs, when it comes to preparing documents, accumulating documents, etc., etc. But it does have its limits when it comes to arbitrators using it. As we have tried to reiterate in the guidelines, arbitrators need to be very conscious of where their personal mandate starts and ends. In other words, our recommendation, again, these are soft law guidelines, our recommendation to arbitrators is to not use AI when it comes to any decision-making process. What does that mean? We don't know. And neither does the law.
And every jurisdiction has its own definition of what that means. It is up to the arbitrator to define what a decision-making process is and to decide whether the use of AI in that process is adequate. Rebeca: Thank you so much, Ben. I want to now kind of pivot, since we've been talking a little bit more about the guidelines, and ask you a few questions about them. So they were created with a global perspective, right? And so what initiatives is the AI task force pursuing to ensure the guidelines remain relevant worldwide? You've been talking about different legal systems and local laws and how practitioners or certain regulations within certain jurisdictions might treat certain things differently. So what is the AI task force doing to remain relevant, to maybe create some sort of uniformity? So what can you tell me about that? Benjamin: So we at the SVAMC task force, we continue to gather feedback, of course, and we're looking for global adaptation. We will continue to work closely with practitioners, with institutions, with lawmakers, with governments, to ensure that when it comes to arbitration, AI is given a space, it's used adequately, and, if possible and preferably for us, the SVAMC AI guidelines are used. That's why they were drafted, to be used. When we presented the guidelines to different committees and to different law sections and bar associations, it struck us that in jurisdictions such as the U.S., and more specifically in New York, where both you and I are based, the community was not very open to receiving these guidelines as guidelines. And the suggestion was actually made to create a white paper. And as much as it seemed to be a shutdown at an early stage, when we were thinking about it (and I was very blessed to have seven additional members in the Guidelines Drafting Committee, seven very bright individuals that I learned a lot from during this process), it was clear to us that jurisdictions such as New York have a very high ethical standard, where guidelines such as our guidelines would potentially be seen as doubling ethical rules. So although we advocate for them not being ethical guidelines whatsoever, because we don't believe they are, we strongly suggest that local and international ethical standards are being upheld. So with that in mind, we realized that there is more to a global aspect that needs to be addressed rather than an aspect of law associations in the US or in the UK or now in Europe. Up-and-coming jurisdictions that up until now did not have a lot of exposure to artificial intelligence and maybe even technology as a whole are rising. And they may need more guidance than jurisdictions where technology may be an instant away. So what the AI task force has created, and is continuing to recruit for, are regional committees for the AI Task Force, tracking AI usage in different legal systems and different jurisdictions. Our goal is to track AI-related legislation and its potential impact on arbitration. These regional committees will also provide jurisdiction-specific insights to refine the guidelines. And hopefully, or this is what we anticipate, these regional committees will help bridge the gap between AI's global development and local legal frameworks. There will be a dialogue. We will continue, obviously, to be present at conferences, to have open dialogue, and to recruit, of course, for these committees.
But the next step is definitely to focus on these regional committees and to see how we, as the AI task force of the Silicon Valley Arbitration Mediation Center, can impact the use of AI in arbitration worldwide. Rebeca: Well, that's very interesting. So you're utilizing committees in different jurisdictions to keep you apprised of what's happening in each jurisdiction. And then with that, continue, you know, somehow evolving the guidelines and gathering information to see how this field, you know, is changing rapidly. Benjamin: Absolutely. Initially, we were thinking of just having a small local committee to analyze different jurisdictions and what laws and what court cases, etc. But we soon came to realize that it's much more than tracking judicial decisions. We need people on the ground that are part of a jurisdiction, part of that local law, to tell us how AI impacts their day-to-day, how it may differ from yesterday to tomorrow, and what potential legislation will be enacted to either allow or disallow the use of certain AI. Rebeca: That's very interesting. I think it's something that will keep the guidelines up to date and relevant for a long time. So kudos to you, the SVAMC and the task force. Now, I know that the guidelines are a very short paper, you know, and then in the back you have the commentary on them. So I'm not going to dissect all of the guidelines, but I want to come and talk about one of them in particular that I think created a lot of discussion around the guidelines themselves. So for full disclosure, right, I was part of the reviewing committee of the AI guidelines. And I remember that one of the most debated aspects of the SVAMC AI guidelines is guideline three on disclosure, right? So should arbitrators and counsel disclose their AI use in proceedings? I think that has generated a lot of debate. And that's the reason why we have the resulting guideline number three, the way it is drafted. So can you give us a little bit more insight into what happened there?
And in our analysis, it turned out that a blanket disclosure of AI usage, or in general, an over-disclosure of the use of AI in arbitration, may actually lead to adverse consequences for the parties who make the disclosure. Why? Because not knowing how AI can impact these submissions causes arbitrators not to know what to do with that disclosure. So ultimately, it's really up to the parties to decide: how was AI used? How can it impact the case? What is it I want to disclose? How do I disclose? It's also important for the arbitrators to understand, what do I do with the disclosure, before saying everything needs to be disclosed. During the GAR event in New York, the issue was raised whether documents which were prepared with the use of AI should be disclosed or whether there should be a blanket disclosure. And quite frankly, the debate went back and forth, but ultimately it comes down to cross-examination. It comes down to the expert or the party submitting the document being able to back up where the information comes from rather than knowing that AI was used. And to put that in perspective, we received a very interesting question of why we should continue using AI, knowing that approximately 30% of its output is hallucinations and it needs revamping. This was compared to a summer associate or a first-year associate, and the question was very simple. If I have a first-year associate or a summer associate whose output has a 30% error rate, why would I continue using that associate? And quite frankly, there is merit to the question, and it really has a very simple answer. And the answer is time and money. Using AI makes it much faster to receive output than using a first-year associate or summer associate, and it's way cheaper. For that, it's worth having a 30% error margin. I don't know where they got the 30% from, but we just went along with it. Rebeca: I was about to ask you where they got the 30%. And well, I think that for first-year associates or summer associates that are listening, I think that the main thing will be for them to become very savvy in the use of AI so they can become relevant to the practice. I think everyone, you know, there's always that question about whether AI will replace all of us, the entire world, and we'll go into machine apocalypses. I don't see it that way. In my view, I see that if we, you know, if we train ourselves, if we're not afraid of using the tool, we'll very much be in a position to pivot and understand how to use it. And when you have, what is the saying, garbage in, garbage out. So if you have a bad input, you will have a bad output. You need to know the case. You need to know your documents to understand whether the machine is hallucinating or giving you, you know, information that is not real. I like to play and ask certain questions to ChatGPT, you know, here and there. And sometimes I, you know, I ask obviously things that I know the answer to. And then I'm like, ChatGPT, this is not accurate. Can you check on this? And it's like, oh, thank you for correcting me. I mean, and it's just a way of, you've got to try and understand it so you know where to make improvements. But that doesn't mean that the tool, because it's a tool, will come and replace, you know, your better judgment as a professional, as an attorney.
This is what the SVAMC AI guidelines stand for. Practitioners need to accustom themselves to the proper use of AI. AI can be used from paid versions to unpaid versions. We just need to understand what is an open-source AI, what is a closed-circuit AI. Again, for whoever's listening, feel free to look up the guidelines. There's a lot of information there. There are tons of articles written at this point. And just be very mindful if there is an open AI system, such as an unpaid ChatGPT version. It does not mean you cannot use it. First, check with your firm to make sure you're allowed to use it. I don't want to get into any trouble. Rebeca: Well, we don't want to put confidential information on an open AI platform. Benjamin: Exactly. Once the firm or your colleagues allow you to use ChatGPT, even if it's an open version, just be very smart about what it is you're putting in. No confidential information, no potential conflict check, no potential cases. Just be smart about what it is you put in. Another aspect we were actually debating about is this hallucination. Just an example: let's say this is an ISDS case, so we're talking a little more public, and you ask ChatGPT, hey, show me all the cases against Costa Rica. And it hallucinates, too. It might actually be that somebody input information for a potential case against Costa Rica or a theoretical case against Costa Rica, and ChatGPT, being on the open end, takes that as one potential case. So just be very smart. Be diligent, but also don't be afraid of using it. Rebeca: That's a great note to end on. AI is here to stay. And as legal professionals, it's up to us to ensure it serves the interests of justice, fairness, and efficiency. And for those interested in learning more about the SVAMC AI guidelines, you can find them online at svamc.org and search for guidelines. I tried it myself and you will go directly to the guidelines. And if you'd like to stay updated on developments in AI and arbitration, be sure to follow Tech Law Talks and join us for future episodes where we'll continue exploring the intersection of law and technology. Ben, thank you again for joining me today. It's been a great pleasure. And thank you to our listeners for tuning in. Benjamin: Thank you so much, Rebeca, for having me, and Tech Law Talks for the opportunity to be here. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies Practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.

AI explained: The EU AI Act, the Colorado AI Act and the EDPB (22:33)
Partners Catherine Castaldo, Andy Splittgerber, Thomas Fischl and Tyler Thompson discuss various recent AI acts around the world, including the EU AI Act and the Colorado AI Act, as well as guidance from the European Data Protection Board (EDPB) on AI models and data protection. The team presents an in-depth explanation of the different acts and points out the similarities and differences between the two. What should we do today, even though the Colorado AI Act is not in effect yet? What do these two acts mean for the future of AI? Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Catherine: Hello, everyone, and thanks again for joining us on Tech Law Talks. We're here with a really good array of colleagues to talk to you about the EU AI Act, the Colorado AI Act, the EDPB guidance, and we'll share some of those initials soon on what they all mean. But I'm going to let my colleagues introduce themselves. Before I do that, though, I'd like to say if you like our content, please consider giving us a five-star review wherever you find us. And let's go ahead and first introduce my colleague, Andy. Andy: Yeah, hello, everyone. My name is Andy Splittgerber. I'm a partner at Reed Smith in the Emerging Technologies Department based out of Munich in Germany. And looking forward to discussing with you interesting data protection topics. Thomas: Hello, everyone. This is Thomas, Thomas Fischl in Munich, Germany. I also focus on digital law and privacy. And I'm really excited to be with you today on this podcast. Tyler: Hey everyone, thanks for joining. My name is Tyler Thompson. I'm a partner in the emerging technologies practice at Reed Smith based in the Denver, Colorado office. Catherine: And I'm Catherine Castaldo, a partner in the New York office. So thanks to all my colleagues. Let's get started. Andy, can you give us a very brief overview of the EU AI Act? Andy: Sure, yeah. It came into force in August 2024. And it is a law mainly about the responsible use of AI. Generally, it is not really focused on data protection matters. Rather, it sits next to the world-famous European General Data Protection Regulation. It has a couple of passages where it refers to the GDPR and also sometimes where it states that certain data protection impact assessments have to be conducted. Other than that, it has its own concept dividing up AI systems into different categories: prohibited AI, high-risk AI, and then normal AI systems. And we're just expecting new guidance on how authorities and how the Commission interpret what AI systems are, so watch out for that. There are also special rules on generative AI, and then some rules on transparency requirements when organizations use AI towards end customers. And depending on these risk categories, there are certain requirements, and attaching to each of these categories, developers, importers, and also users, meaning organizations using AI, have to comply with certain obligations around accountability, IT security, documentation, checking, and of course, human intervention and monitoring.
This is the basic concept, and the rules start to kick in on February 2nd, 2025, when prohibited AI must not be used anymore in Europe. And the next bigger wave will be on August 2nd, 2025, when the rules on generative AI kick in. So organizations should start and be prepared to comply with these rules now and get familiar with this new type of law. It's kind of like a new area of law. Catherine: Thanks for that, Andy. Tyler, can you give us a very brief overview of the Colorado AI Act? Tyler: Sure, happy to. So the Colorado AI Act, this is really the first comprehensive AI law in the United States. Passed at the end of the 2024 legislative session, it covers developers or deployers that use a high-risk AI system. Now, what is a high-risk AI system? It's just a system that makes a consequential decision. What is a consequential decision? These can include things like education decisions, employment opportunities, employment-related decisions, financial lending service decisions, if it's an essential government service, a healthcare service, housing, insurance, legal services. So that consequential decision piece is fairly broad. The effective date of it is February 1st of 2026, and the Colorado AG is going to be enforcing it. There's no private right of action here, but violating the Colorado AI Act is considered an unfair and deceptive trade practice under Colorado law. So that's where you get the penalties of the Colorado AI Act. It's tied into the Colorado deceptive trade practice law. Catherine: That's an interesting angle. And Tom, let's turn to you for a moment. I understand that the European Data Protection Board, or EDPB, has also recently released some guidance on data protection in connection with artificial intelligence. Can you give us some high-level takeaways from that guidance? Thomas: Sure, Catherine, and it's very true that the EDPB has just released a statement. It actually was released in December of last year. And yeah, they have released that highly anticipated statement on AI models and data protection. This statement of the EDPB actually follows a much-discussed paper published by the German Hamburg Data Protection Authority in July of last year. And I also wanted to briefly touch upon this paper. Because the Hamburg Authority argued that AI models, especially large language models, are anonymous when considered separately: they do not involve the processing of personal data. To reach this conclusion, the paper decoupled the model itself from, firstly, the prior training of the model, which may involve the collection and further processing of personal data as part of the training data set, and secondly, the subsequent use of the model, where a prompt may contain personal data and output may be used in a way that means it represents personal data. And interestingly, this paper considered only the AI model itself and concluded that the tokens and values that make up the inner processes of a typical AI model do not meaningfully relate to or correspond with information about identifiable individuals. And consequently, the model itself was classified as anonymous, even if personal data is processed during the development and the use of the model. So the EDPB statement, the recent statement, does actually not follow this relatively simple and secure framework proposed by the German authority. The EDPB statement actually responds to a request from the Irish Data Protection Commission and gives kind of a framework, just particularly with respect to certain aspects.
It actually responds to four specific questions. And the first question was, so under what conditions can AI models be considered anonymous? And the EDPB says, well, yes, they can be considered anonymous, but only in some cases. So it must be impossible with all likely means to obtain personal data from the model, either through attacks aimed at extracting the original training data or through other interactions with the AI model. The second and third questions relate to the legal basis of the use and the training of AI models. And the EDPB answered those questions in one answer. So the statement indicates that the development and use of AI models can generally be based on a legal basis of legitimate interest. Then the statement lists a variety of different factors that need to be considered in the assessment scheme according to Article 6 GDPR. So again, it refers to an individual case-by-case analysis that has to be made. And finally, the EDPB addresses the highly practical question of what consequences it has for the use of an AI model if it was developed in violation of data protection regulations. The EDPB says, well, this partly depends on whether the AI model was first anonymized before it was disclosed to the model operator. And otherwise, the model operator may need to assess the legality of the model's development as part of their accountability obligations. So quite an interesting statement. Catherine: Thanks, Tom. That's super helpful. But when I read some commentary on this paper, there's a lot of criticism that it's not very concrete and doesn't provide actionable guidance to businesses. Can you expand on that a little bit and give us your thoughts? Thomas: Yeah, well, as is sometimes the case with these EDPB statements, which necessarily reflect the consensus opinion of authorities from 27 different member states, the statement does not provide many clear answers. So instead, the EDPB offers kind of indicative guidelines and criteria and calls for case-by-case assessments of AI models to understand whether and how they are affected by the GDPR. And interestingly, someone has actually counted how often the phrase case-by-case appears in the statement. It appears actually 16 times, and can or could appears actually 161 times. So obviously, this is likely to lead to different approaches among data protection authorities, but it's maybe also just an intended strategy of the EDPB. Who knows? Catherine: Well, as an American, I would read that as giving me a lot of flexibility. Thomas: Yeah, true. Catherine: All right, let's turn to Andy for a second. Andy, also in view of the AI Act, what do you now recommend organizations do when they want to use generative AI systems? Andy: That's a difficult question after 161 cans and coulds. We always try to give practical advice. And I mean, with regard, like if you now look at the AI Act plus this EDPB paper or generally the GDPR, there are a couple of items where organizations can prepare and need to prepare. First of all, organizations using generative AI must be aware that a lot of the obligations are on the developers. So the developers of generative AI definitely have more obligations, especially under the AI Act, for example. They have to create and maintain the model's technical documentation, including the training and testing processes, and monitor the AI system. They must also, which can be really painful and will be painful, make available a detailed summary of the content that was used for the training of the model.
And this also goes very much into copyright topics. So there are a lot of obligations, and none of these are on the using side. So if organizations use generative AI, they don't have to comply with all of this, but they should, and that's our recommendation, ensure in their agreements, when they license the model or the AI system, that they get confirmation from the developer that the developer complies with all of these obligations. That's kind of like supply chain compliance in AI. So that's one of the aspects from the using side: make sure in your agreement that the provider complies with the AI Act. Another item for the agreement when licensing generative AI systems, attaching to what Thomas said, is getting a statement from the developer on whether or not the model itself contains personal data. The ideal answer is no, the model does not contain personal data, because then we don't have the poisonous tree. If the developer was not in compliance with the GDPR or data protection laws when doing the training, there is a cut: if the model does not contain any personal data, then this cannot infect the later use by the using organization. So this is a very important statement. We have not seen this in practice very often so far, and it is quite a strong commitment developers are asked to give, but it is something at least to be discussed in the negotiations. So that's the second point. A third point for the agreement with the provider is whether or not the usage data is used for further training, which can create data protection issues and might require using organizations to solicit consent or other justifications from their employees or users. And then, of course, having in place a data processing agreement with the provider or developer of the generative AI system if it runs on someone else's systems. So these are all items for the contracts, and we think this is something that needs to be tackled now, because it always takes a while until the contract is negotiated and in place. And on top of this, as I said, the AI Act obligations on the using side are rather limited. There are only some transparency obligations for using organizations, for example, to inform their employees that they're using AI, or to inform end users that a certain text or photo or article was created by AI. So like a tag, "this was created by AI," being transparent that AI was used to develop something. And then on top of this, the general GDPR compliance requirements apply, like transparency about what personal data is processed when the AI is used, justification of processing, adding the AI system to your records of processing activities, and also checking whether a data protection impact assessment is potentially required. This will mainly be the case if the AI has an intensive impact on the personality rights of data subjects. So these are the general requirements. So, takeaways: check the contracts, check the limited transparency requirements under the AI Act, and comply with what you already know under the GDPR. Tyler: It's interesting because there is a lot of overlap between the EU AI Act and the Colorado AI Act. But Colorado does have those robust impact assessment requirements. You know, you've got to provide notification. You have to provide opt-out rights and appeal. You do have some of that publicly facing notice requirement as well. And so the one thing that I think I want to highlight that's a little bit different, we have an AG notification requirement.
So if you discover that your artificial intelligence system has been creating an effect that could be considered algorithmic discrimination, you have an affirmative duty to notify the attorney general. So that's something that's a little bit different. But I think overall, there's a lot of overlap between the Colorado AI Act and the EU AI Act. And I like Andy's analogy of the supply chain, right? Colorado as well. Yes, it applies to the developers, but it also applies to deployers. And on the deployer side, it is kind of that supply chain type of analogy: these are things that you as a deployer need to do. You need to go back, look at your developer, make sure you have the right documentation, that you've checked the right boxes there and have done the right things. Catherine: Thanks for that, Tyler. Do you think we're entering into an area where the U.S. states might produce more AI legislation? Tyler: I think so. Virginia has proposed a version of basically the Colorado AI Act. And I honestly think we could see the same thing with these AI bills that we have seen with privacy on the U.S. side, which is kind of a state-specific approach. Some states adopting the same or highly similar versions of the laws of other states, but then maybe a couple of states going off on their own and doing something unique. So it would not be surprising to me at all if, at least in the short to midterm, we have a patchwork of AI laws throughout the United States just based on individual state law. Catherine: Thanks for that. And I'm going to ask a question to Tyler, Tom and Andy. Any one of you can answer, whoever thinks of it first. But we've been reading a lot lately about DeepSeek and all the cyber insecurities, essentially, with utilizing a system like that, and some failures on the part of the developers there. Is there any security requirement in either the EU or Colorado AI acts for deploying or developing a new system? Tyler: Yeah, for sure. So where your security requirements are going to come in, I think, is in the impact assessment piece, right? Where, you know, when you have to look at your risks and how this could affect an individual, whether through a discrimination issue or another type of risk, you're going to have to address that in the assessment. So while it's not like a specific security provision, there's no way that you're going to get around some of these security requirements, because you have to do that very robust impact assessment, right? Part of that analysis under the impact assessment is known or reasonably foreseeable risks. So things like that, you're going to have to, I would say, address via some of the security requirements facing the AI platform. Catherine: Great. And what about from the European side? Andy: Yes, it's similar from the European side, or perhaps even a bit more. Definitely robustness, cybersecurity, IT security is a major portion of the AI Act. So that's definitely a very, very important obligation and duty that must be complied with. Catherine: And I would think too under GDPR, because you have to ensure adequate technical and organizational measures, so if you had personal information going into the AI system, you'd have to comply with that requirement as well, since they stand side by side. Andy: Exactly, exactly. And then, under both, there are also notification obligations if something goes wrong. Catherine: Well, good to know.
All right, well, maybe we'll do a future podcast on the impact of the NIST AI risk management framework and its application to both of these large bodies of law. But I thank all my colleagues for joining us today. We have time for just a quick final thought. Does anyone have one? Andy: A thought from me: now that the AI Act has come into force, as a practical European I'm worried that we're killing the AI industry and innovation in Europe. It's good to see that at least some states in the U.S. follow a bit of a similar approach, even if it's, you know, different. And I haven't given up hope for a more global solution. Perhaps the AI Act will also be adjusted a bit to come closer to a global solution. Tyler: On the U.S. side, I'd say, look, my takeaway is start now, start thinking about some of this stuff now. It can be tempting to say it's just Colorado, we have till February of 2026. I think a lot of the things that the Colorado AI Act and even the EU AI Act are requiring are arguably things that you should be doing anyway. So I would say start now, especially, as Andy said, on the contract side, if nothing else. Start thinking about doing a deal with a developer or a deployer: what needs to be in that agreement? How do we need to protect ourselves? And how do we need to look at the regulatory space to future-proof this so that when we come to 2026, we're not amending 30 or 40 contracts? Thomas: And maybe a final thought from my side. The EDPB statement only answers a few questions, actually. It doesn't touch other very important issues like automated decision-making; there is nothing on that in the document. There is not really anything about the use of sensitive data, and data protection impact assessments are not addressed. So a lot of topics remain unclear; at least there is no guidance yet. Catherine: Those are great views and I'm sure really helpful to all of our listeners who have to think of these problems from both sides of the pond. And thank you for joining us again on Tech Law Talks. We look forward to speaking with you again soon. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.…
Tech Law Talks

1 Navigating NIS2: What businesses need to know 21:17
Catherine Castaldo, Christian Leuthner and Asélle Ibraimova dive into the implications of the new Network and Information Security (NIS2) Directive, exploring its impact on cybersecurity compliance across the EU. They break down key changes, including expanded sector coverage, stricter reporting obligations and tougher penalties for noncompliance. Exploring how businesses can prepare for the evolving regulatory landscape, they share insights on risk management, incident response and best practices. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Catherine: Hi, and welcome to Tech Law Talks. My name is Catherine Castaldo, and I am a partner in the New York office in the Emerging Technologies Group, focusing on cybersecurity and privacy. And we have some big news with directives coming out of the EU for that very thing. So I'll turn it to Christian, who can introduce himself. Christian: Thanks, Catherine. So my name is Christian Leuthner. I'm a partner at the Reed Smith Frankfurt office, also in the Emerging Technologies Group, focusing on IT and data. And we have a third attorney on this podcast, our colleague, Asélle. Asélle: Thank you, Christian. Very pleased to join this podcast. I am counsel based in Reed Smith's London office, and I am also part of the Emerging Technologies Group and work on data protection, cybersecurity, and technology issues. Catherine: Great. As we previewed a moment ago, on October 17th, 2024, there was a deadline for the transposition of a new directive, commonly referred to as NIS2. And for those of our listeners who might be less familiar, would you tell us what NIS2 stands for and who is subject to it? Christian: Yeah, sure. So NIS2 stands for the Directive on Security of Network and Information Systems. And it is the second iteration of the EU's legal framework for enhancing the cybersecurity of critical infrastructures and digital services. It replaces the previous directive, which obviously is called NIS1, which was adopted in 2016 but had some limitations and gaps. So NIS2 applies to a wider range of entities that provide essential or important services to the society and the economy, such as energy, transport, health, banking, digital infrastructure, cloud computing, online marketplaces, and many, many more. It also covers public administrations and operators of electoral systems. Basically, anyone who relies on network and information systems to deliver their services and whose disruption or compromise could have significant impacts on the public interest, security or rights of EU citizens and businesses will be in scope of NIS2. As you already said, Catherine, NIS2 had to be transposed into national member state law. So it's a directive, not a regulation, contrary to DORA, which we discussed the last time in our podcast. It had to be implemented into national law by October 17th, 2024. But most of the member states did not. So the EU Commission has now started investigations regarding violations of the Treaty on the Functioning of the European Union against, I think, 23 member states, as they have not yet implemented NIS2 into national law.
Catherine: That's really comprehensive. Do you have any idea what the timeline is for the implementation? Christian: It depends on the state. So there are some states that already have comprehensive drafts, and those just need to go through the legislative process. In Germany, for example, we had a draft, but we have elections in a few weeks. And the current government just stated that they will not implement the law before that. So after the election, the implementation law will probably be discussed again and redrafted. And so it'll take some time. It might be in the third quarter of this year. Catherine: Very interesting. We have a similar process. Sometimes it happens in the States where things get delayed. Well, what are some of the key components? Asélle: So, NIS2 focuses on cybersecurity measures, and we need to differentiate it from the usual cybersecurity measures that any organization thinks about in the usual way, where they protect their data and their systems against cyber attacks or incidents. The purpose of this legislation is to make sure there is no disruption to the economy or to others. And in that sense, similar notions apply. Organizations need to focus on ensuring availability, authenticity, integrity, and confidentiality of data, and protect their data and systems against all hazards. These notions are familiar to us also from the GDPR framework. There are 10 cybersecurity risk management measures that NIS2 talks about, and these are: policies on risk analysis and information system security; incident handling; business continuity and crisis management; supply chain security; security in systems acquisition, development, and maintenance; policies to assess the effectiveness of measures; basic cyber hygiene practices and training; cryptography and encryption; human resources security; and use of multi-factor authentication. So these are familiar notions also. And it seems the general requirements are something that organizations will be familiar with. However, the European Commission in its NIS Investments Report of November 2023 has done research, a survey, and actually found that organizations that are subject to NIS2 didn't really even take these basic measures. Only 22% of those surveyed had third-party risk management in place, and only 48% of organizations had top management involved in approving cybersecurity risk policies and any type of training. And this reduces the general commitment of organizations to cybersecurity. So there are clearly gaps, and NIS2 is trying to focus on improving that. There are a couple of other things that I wanted to mention that are different from NIS1 and are important. So as Christian said, essential entities have a different compliance regime applied to them compared with important entities. Essential entities need to systematically document their compliance and be prepared for regular monitoring by regulators, including regular inspections by competent authorities, whereas important entities are only obliged to kind of be in touch and communicate with competent authorities in case of security incidents. And there is an important clarification in terms of the supply chain; these are the questions we receive from our clients. And the question is, does the supply chain mean anyone that provides services or products? From our reading of the legislation, supply chain only relates to ICT products and ICT services.
Of course, there is a proportionality principle employed in this legislation, as is usual with most European legislation, and there is a size threshold. The legislation only applies to those organizations that exceed the medium-size threshold. And two more topics, and I'm sorry that I'm kind of taking over the conversation here, but I thought the self-identification point was important, because in the view of the European Commission, the original NIS1 didn't cover the organizations it intended to cover, and so, in the European Commission's view, the requirements are so clear in terms of which entities it applies to that organizations should be able to assess it themselves and register, or identify themselves, with the relevant authorities by April this year. And the last point: digital infrastructure organizations, their cross-border nature is specifically taken into consideration. If they provide services in several member states, there is a mechanism for them to register with the competent authority where their main establishment is based, similar to the notion under the GDPR. Catherine: It sounds like, though, there's enough information in the directive itself without waiting for the member state implementation that companies who are subject to this rule could be well on their way to being compliant by just following those principles. Christian: That's correct. So even if the implementation into national law has currently not happened in all of the member states, companies can already work to comply with NIS2. So once the law is implemented, they don't have to start from zero. NIS2 sets out the requirements that important and essential entities under NIS2 have to comply with. For example, have a proper information security management system, have supply chain management, and train their employees. So they can already work to implement NIS2. And the directive itself also has annexes that set out the sectors and potential entities that might be in scope of NIS2, and the member states cannot really vary from those annexes. So if you are already in scope of NIS2 under the information that is in the directive itself, you can be sure that you would probably also have to comply with your national rules. There might be some gray areas where it's not fully clear if someone is in scope of NIS2, and those entities might want to wait for the national implementation. And it also can happen that the national implementation goes beyond the directive and covers sectors or entities that might not be in scope under the directive itself. And then of course they will have to work to implement the requirements then. I think a good starting point anyway is the existing security program that companies already hopefully have in place. So if they, for example, have an ISO 27001 framework implemented, it might be good to start with a mapping exercise of what NIS2 might require in addition to ISO 27001. And then look at whether this should be implemented now or whether companies can wait for the national implementation. But it's not recommended to wait for the national implementation and do nothing until then. Asélle: I agree with that, Christian. And I would like to point out that, in fact, digital infrastructure entities have very detailed requirements for compliance, because there was an implementing regulation that basically specifies the cybersecurity requirements under NIS2.
And just to clarify, the digital infrastructure entities that I'm referring to are DNS service providers, TLD name registries, cloud service providers, data centers, content delivery network providers, managed service providers, managed security service providers, online marketplaces, online search engines, social networking services, and trust service providers. So the implementing regulation is in fact binding and directly applicable in all member states. And the regulation is quite detailed and has specific requirements in relation to each cybersecurity measure. Importantly, it has detailed thresholds on when incidents should be reported, and we need to take into consideration that not every incident is reportable, only those incidents that are capable of causing significant disruption to the service or significant impact on the provision of the services. So please take that into consideration. And ENISA also published implementing guidance, and it's 150 pages, just explaining what the implementing regulation means. And it's still a draft. The consultation ended on the 9th of January 2025, so there'll be further guidance on that. Catherine: Well, we can look forward to that. But I guess the next question would be, what are some of the risks for noncompliance? Christian: Noncompliance with NIS2 can have serious consequences for the entities concerned, both legal and non-legal. On the legal side, NIS2 empowers the national authorities to impose sanctions and penalties for breaches. They can range from warnings and orders to fines and injunctions, depending on the severity and duration of the infringement. The sanctions can be up to 2% of the annual turnover or 10 million euros, whichever is higher, for essential entities, and up to 1.4% of the annual turnover or 7 million euros, whichever is higher, for important entities. NIS2 also allows the national authorities to take corrective or preventive measures. They can suspend or restrict the provision of the services, or order the entities to take remedial actions or improve their security posture. So even if they have implemented security measures and the authorities determine that they are not sufficient in light of the risk applicable to the entity, they can require them to implement other measures to increase security. On the non-legal side, it's very similar to what we discussed in our DORA podcast. There can be civil liability if there is an incident and damage occurs. And of course, the reputational damage and loss of trust and confidence can be really, really severe for the entities if they have an incident, and even more so if it turns out they did not comply with the NIS2 requirements. Asélle: I wanted to add that, unfortunately, with this piece of legislation, member states can add to the list of entities to which this legislation will apply. They can apply higher cybersecurity requirements, and because of the new criteria and new entities being added, it now applies to twice as many sectors as before. So quite a few organizations will need to review their policies and take cybersecurity measures. And it's helpful, as Christian mentioned, that, you know, ENISA has already mapped the cybersecurity measures against existing standards. It's on its website. I think it's super helpful. And it's likely that the cybersecurity measures and the general risk assessment will be done by cybersecurity teams and risk and compliance teams within organizations. However, legal will also need to be involved.
And often policies, once drafted, are reviewed by in-house legal teams. So it's essential that they all work together. It's also important to mention that there will be an impact on the due diligence and contracts with ICT product providers and ICT service providers. So the due diligence processes will need to be reviewed and enhanced, and contracts drafted to ensure they will allow the organization, the recipient of the services, to be compliant with NIS2. And maybe one last point, just to cover off the UK and what's happening there for those who also have operations there. It is clear now that the government will implement a version of NIS2. It's going to follow in the European Union's footsteps. And we were recently informed of a government page on the new Cyber Security and Resilience Bill. It's clear that it's going to be covering five sectors: transport, energy, drinking water, health, and digital infrastructure, as well as digital services, very similar to NIS2, such as online marketplaces, online search engines, and cloud computing services. We are expecting the bill to be introduced to Parliament this year. Catherine: Wow, fantastic news. So it should be a busy cybersecurity season. If any of our listeners think that they need help and think that they may be subject to these rules, I'm sure my colleagues, Asélle and Christian, would be happy to help with the legal governance side of this cybersecurity compliance effort. So thank you very much for sharing all this information, and we'll talk soon. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.…
Tech Law Talks

Tyler Thompson sits down with Abigail Walker to break down the Colorado AI Act, which was passed at the end of the 2024 legislative session to prevent algorithmic discrimination. The Colorado AI Act is the first comprehensive law in the United States that directly and exclusively targets AI and GenAI systems. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Tyler: Hi, everyone. Welcome back to the Tech Law Talks podcast. This is continuing Reed Smith's AI series, and we're really excited to have you here today and for you to be with us. The topic today, obviously, AI and the use of AI is surging ahead. I think we're all kind of waiting for that regulatory shoe to drop, right? We're waiting for when it's going to come out to give us some guardrails or some rules around AI. And I think everyone knows that this is going to happen whether businesses want it to or not. It's inevitable that we're going to get some more rules and regulations here. Today, we're going to talk about what I see as truly the first or one of the first ones of those. That's the Colorado AI Act. It's really the first comprehensive AI law in the United States. So there's been some kind of one-off things and things that are targeted to more privacy, but they might have implications for AI. The Colorado AI Act is really the first comprehensive law in the United States that directly targets AI and generative AI and is specific for those uses, right? The other reason why I think this is really important is because Abigail and I were talking, we see this as really similar to what happened with privacy for the folks that are familiar with that. And this is something where privacy a few years back, it was very known that this is something that needed some regulations that needed to be addressed in the United States. After an absence of any kind of federal rulemaking on that, California came out with their CCPA and did a state-specific rule, which has now led to an explosion of state-specific privacy laws. I personally think that that's what we could see with AI laws as well, is that, hey, Colorado is the first mover here, but a lot of other states will have specific AI laws in this model. There are some similarities, but some key differences to things like the EU AI Act and some of the AI frameworks. So if you're familiar with that, we're going to talk about some of the similarities and differences there as we go through it. And kind of the biggest takeaway, which you will be hearing throughout the podcast, which I wanted to leave you with right up at the start, is that you should be thinking about compliance for this right now. This is something that as you hear about the dates, you might know that we've got some runway, it's a little bit away. But really, it's incredibly complex and you need to think about it right now and please start thinking about it. So as for introductions, I'll start with myself. My name is Tyler Thompson. I'm a partner at the law firm of Reed Smith in the Emerging Technologies Practice. This is what my practice is about. It's AI, privacy, tech, data, basically any nerd type of law, that's me. 
And I'll pass it over to Abigail to introduce herself. Abigail: Thanks, Tyler. My name is Abigail Walker. I'm an associate at Reed Smith, and my practice focuses on all things related to data privacy compliance. But one of my key interests in data privacy is where it intersects with other areas of the law. So naturally, watching the Colorado AI Act go through the legislative process last year was a big pet project of mine. And now it's becoming a significant part of my practice and probably will be in the future. Tyler: So the Colorado AI Act was passed at the very end of the 2024 legislative session. And it's largely intended to prevent algorithmic discrimination. And if you're asking yourself, well, what does that mean? What is algorithmic discrimination? In some sense, that is the million-dollar question, but we're going to be talking about that in a little bit of detail as we go through this podcast. So stay tuned and we'll go into that in more detail. Abigail: So Tyler, this is a very comprehensive law and I doubt we'll be able to cover everything today, but I think maybe we should start with the basics. When is this law effective, who's enforcing it, and how is it being enforced? Tyler: So the date that you need to remember is February 1st of 2026. So there is some runway here, but like I said at the start, even though we have a little bit of runway, there's a lot of complexity and I think it's something that you should start now. As far as enforcement, it's the Colorado AG. The Colorado Attorney General is going to be tasked with enforcement here. A bit of good news is that there's no private right of action. So the Colorado AG has to bring the enforcement action themselves. You are not at risk of being sued under the Colorado AI Act by an individual plaintiff. Maybe the bad news here is that violating the Colorado AI Act will be considered an unfair and deceptive trade practice under Colorado law. So the trade practice regulation, that's something that exists in Colorado law like it does in a variety of state laws. And a violation of the Colorado AI Act can be a violation of that as well. And so that just really brings the AI Act into these overarching rules and regulations around deceptive trade practices. And that really increases the potential liability, your potential for damages. And I think also just from a perception point, it puts a Colorado AI Act violation in with these kind of consumer harm violations, which tend to just have a very bad perception, obviously, to your average state consumer. The law also gives the Attorney General a lot of power in terms of being able to ask covered entities for certain documentation. We're going to talk about that as we get into the podcast here. But the AG also has the option to issue regulations that further specify some of the requirements of this law. That's something we're really looking forward to: additional regulations here. As we go through the podcast today, you're going to realize there seems like there's a lot of gray area. And you'd be right, there is a lot of gray area. And that's where we're hoping some of the regulations will come out and try to reduce that amount of uncertainty as we move forward. Abigail, can you tell us who the law applies to and who needs to have their ducks in a row for the AG by the time we hit next February? Abigail: Yeah.
So unlike Colorado's privacy law, which has like a pretty large processing threshold that entities have to reach to be covered, this law applies to anyone doing business in Colorado that develops or deploys a high-risk AI system. Tyler: Well, that high-risk AI system sentence, it feels like you used a lot of words there that have a real legal significance. Abigail: Oh, yes. This law has a ton of definitions, and they do a lot of work. I'll start with a developer. A developer, you can think of just as the word implies. They are entities that are either building these systems or substantially modifying them. And then deployers are the other key players in this law. Deployers are entities that deploy these systems. So what does deploy actually mean? The law defines deploy as to use. So basically, it's pretty broad. Tyler: Yeah, that's quite broad. Not the most helpful definition I've heard. So if you're using a high-risk AI system and you do business in Colorado, basically you're a deployer. Abigail: Yes. And I will emphasize that most of the requirements of the law only apply to high-risk AI systems. And I can get into what that means. High-risk, for the purpose of this law, refers to any AI system that makes or is a substantial factor in making a consequential decision. Tyler: What is a consequential decision? Abigail: They are decisions that produce legal or substantially similar effects. Tyler: Substantially similar. Abigail: Yeah. Basically, as I'm sure you're wondering, what does substantially similar mean? We're going to have to see how that plays out when enforcement starts. But I can get into what the law considers to be legal effects, and I think this might highlight or shed some light on what substantially similar means. The law kind of outlines scenarios that are considered consequential. These include education enrollment, educational opportunities, employment or employment opportunities, financial or lending services, essential government services, health care services, housing, insurance, and legal services. Tyler: So we've already gone through a lot. So I think this might be a good time to just pause and put this into perspective, maybe give an example. So let's say your recruiting department or your HR department uses, aka deploys, an AI tool to scan job applications or job application cover letters for certain keywords. And those applicants that don't use those keywords get put in the no pile, or, hey, this cover letter, it's not talking about what we want to talk about, so we're going to reject them. They're going to go on the no pile of resumes. What do you think about that, Abigail? Abigail: I see that as kind of falling into that employment opportunity category that the law identifies. And I feel like that's kind of almost like falling into that substantially similar thing when it comes to substantially similar to legal effects. I think that use would be covered in this situation. Tyler: Yeah, a lot of uncertainty here, but I think we're all guessing until enforcement really starts or until we get more help from the regulations. Maybe now's the time, Abigail, do you want to give them some relief? Talk about some of the exceptions here. Abigail: Yeah, I mean, we can, but the exceptions are narrow. Basically, as far as developers are concerned, I don't think they're getting out of the act. If your business develops a high-risk AI system and you do business in the state, you're going to comply with it. Tyler: Oh.
Abigail: Yeah, or face enforcement. The law does try to prevent deployers from accidentally becoming developers, and that's a nuanced thing. Tyler: So I guess that's interesting. What do you mean by that, that it tries to prevent them from becoming developers, and how does it do that? Abigail: So if you recall when I was talking about what a developer is, you can fall into the developer category if you modify a high-risk AI system. You don't have to be the one that actually creates it from the start, but if you substantially modify a system, you're a developer at that point. But what the law tries to do is make it so that if you're a deployer and your business deploys one of these systems and the system continues to learn based off of your deployment, and then that learning changes the system, you don't become a developer as a result. But, and like, this is a big but, that chain of events, the system modifying itself based off of training on your data or your use of the system, that has to be an anticipated change that you found out about through an impact assessment. So, and it also has to be technically documented. I'll give a crude hypothetical for this, just like a simple one to kind of help you wrap your mind around what I'm talking about here. Let's say I have a donut business and I start using Bakery Corporation's AI system. And then that system starts to become an expert in donuts as a result of my using it. It can't be a happy accident. I have to anticipate that or else my business becomes a developer. Tyler: Yeah. Donuts are high risk in this scenario, right? Abigail: Well, donuts are always a consequential decision, Tyler. Tyler: That's fair. Abigail: But there's more. Deployers have a little bit of a small business exception. And I think that this is going to end up really helping a lot of companies out. Basically, you will meet this exception if you employ fewer than 50 full-time equivalent people, if you don't use your own data to train the high-risk AI system. And if you use the system as intended by the developer and make available to your consumers similar information as to what would go into an impact assessment, then you get out of some of the law's more draconian requirements, such as the public notice requirement, impact assessments, and the big compliance program that the law requires that we'll get into later. Tyler: Okay, wait. So if my donut business is already providing consumers with some of the similar information that would have been in the impact assessment, I don't actually have to conduct a full impact assessment then? Abigail: Yes. Tyler: But wouldn't I have to basically do the impact assessment anyway to know what the similar information is? Like, how can I provide similar information without knowing what would have been in the impact assessment? Abigail: Yes and no. You have to do it, but you don't. And I think this is another spot where the definitions are kind of doing a lot of work here. I think that what the law is trying to do with this exception is trying to not force small businesses to have these robust, expensive compliance programs that you and I know are a heavy lift, while still kind of making them carefully consider the consequences of using a high-risk AI system. I think that's the balance that's trying to be struck here is, you know, we understand that compliance programs, especially the one that this law dictates, are very expensive and cumbersome and can sometimes require whole compliance departments. 
But we also still don't want to let small businesses employ high-risk AI systems in a way that's not carefully considered and could potentially result in algorithmic discrimination. Tyler: Okay, that makes sense. So maybe a small business would be using the requirements of an impact assessment, not actually doing one, but using the requirements as a guide for how they should go about using the AI system. So they don't actually have to do the assessment, but just looking at the requirements provides a helpful guide. Abigail: Yeah, I think that's the case. And we'll get into this more later when we talk about some of the enforcement mechanisms, but they also wouldn't have to provide the attorney general with an impact assessment. That's part of the enforcement aspect. Tyler: So wait a second. I think we've been positive for probably almost a minute or two. So I think it's time for maybe the other shoe to drop, right? So you said that this only exempts small businesses from a number of requirements. I think they still have to tell customers if a high-risk system was used to make a decision about them though. Is that right? Abigail: Yes, that's right. Tyler: Okay, interesting. So is there any other not very relieving relief that you want to share with us? Abigail: Yes. So I also want to circle back on the high risk thing. Like for example, the law does explicitly say that AI systems that consumers talk to for informational purposes, you know, like if I go to one of these language models and I say, write an email to my boss asking for a last minute vacation, these are not high risk as long as they have an accepted use agreement, like a terms of use. Tyler: Okay, so that's interesting. I think I see what the act is getting at there. So if I ask an AI model with a terms of use to write that vacation email, then it results in my resignation, probably because I didn't read it before sending it, then that's out of scope. Abigail: Yes. And one last thing, if I may. Tyler: Of course you may. Abigail: Yes. I want to make sure that this is clear. All consumer facing AI systems, high risk or not, have to disclose to the consumer that they are using or talking to AI. There is a funny little exception here. The law includes an obviousness exception, but I would not counsel anyone to rely on that. And I'm sure, Tyler, you've seen people on social media fall for those AI-generated videos where they have like 12 fingers and there's like phrases, but they're not using real letters. I think obvious is too subjective to rely on that exception. Tyler: Yeah, I agree. And of course, I would never fall for one of those and certainly have not numerous times. So good to know. So let's switch gears. Let's talk a little bit about internal compliance requirements. We've spent a lot of time talking about the who and the what that the Colorado AI Act applies to. I think now we shift gears and we talk about what does the Colorado AI Act actually require. And I guess, Abigail, do you want to start by telling us what developers have to do? Abigail: Yeah. So first and foremost, I will say both developers and deployers have an affirmative responsibility to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. And I'm sure, Tyler, that you're probably prickling at the reasonably foreseeable aspect of that. 
Tyler: Yeah, I think in general, right, the regulator always has a different idea of what's reasonably foreseeable than, you know, a business in this space that's actually operating in this area, right? You know, I would say that the business could be the true expert on AI and what is algorithmic discrimination, but reasonable minds can disagree about what's reasonable and what's reasonably foreseeable. And so I do think that's tricky there. And while it might seem like a gray area that's helpful, I think it's just a gray area that adds risk. Abigail: Yeah. And now getting kind of more into what developers have to do, I'm about to bomb you with a laundry list of requirements. So if this is overwhelming, don't worry, you're not alone. But one of the main aspects of the requirements for developers is that they have to provide the deployers. So remember the people that are using the developer's high-risk AI system. They have to provide them with tons of information. They have to give them a statement describing reasonably foreseeable uses and known harmful or inappropriate uses. They have to document the data used to train the system, the reasonably foreseeable limitations of the system, purpose, intended benefits, and uses, and all other information a deployer would need to meet their obligations. They also have to document how it was evaluated for performance and mitigation of algorithmic discrimination, data governance measures for how they figured out which data sets were the right ones to train the model. The intended outputs of it, and also how the system should and should not be used, including being monitored by an individual. It's really tracing this model from inception to deployment. Tyler: Wow. Woof. Well, I want to get into some of this intended uses versus reasonably foreseeable uses thing. Talk about that for a minute. I think a key point here will be trying to address some of these things in the contract, right? You know, Abigail, you and I have talked a lot about artificial intelligence addendums, artificial intelligence agreements that you can attach to kind of a master agreement. I think something like that, that gives us some certainty and something reasonable in a contract might be key here, but I'm interested to hear your thoughts. Abigail: Yeah, I agree with you, Tyler. I think that this intended uses thing, it's interesting that the law also requires developers to also identify what they think are not intended uses, but possible uses. And here I'm thinking that a developer probably in their AI addendum is probably going to want to put stuff in there, especially tying to like indemnification, kind of saying, hey, Deployer, if you use this in a way that we did not intend, you need to hold us harmless for any downflow effects of that. I think that's going to be key for developers here. Tyler: Yeah, I'm with you. I think the contract is just so crucial and just have to have that in my mind moving forward to do this the right way. You talk about the deployers. Dare I ask about the deployers and what the act requires there? Abigail: Yep. And here I think our listeners are going to really see why the small business exception is a big deal. So, deployers are required to implement a risk management policy and program, and the law does not leave anything here to chance. To quote it, the program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk AI system. 
Basically, it's not enough just to paper up as a deployer. You have to be papering up and then following the plan that you set up for yourself. Tyler: Yeah, interesting. And I wonder if the regulators kind of saw what happened with even privacy, right? Where there was a lot of "put a policy in place, let's paper this," but on a monthly or daily or yearly basis, whatever your life cycle is, you're not actually doing a lot with it. So interesting that they have made that so robust with those requirements. And it seems like this is where that small business exception must be pretty important, right? Abigail: Absolutely, yes. Because as you and I know, this can get pretty expensive and it can take up a lot of man hours as well. The program also has to be based off of a nationally or internationally recognized standard, kind of like we see when NIST publishes guidance. It has to consider things like the characteristics of the deployer and also the nature, scope, and intended uses of the system. And it also has to consider the sensitivity and volume of the data processed. Like I said, nothing's left up to chance here. And that's not all. This is another big compliance requirement. Tyler, do you want to give an overview of what the impact assessments have to look like? We've seen these in data privacy before. Tyler: Yeah, for sure. Happy to. And I know it just seems like a lot because it is a lot, but hopefully the impact assessment is something that you're at least a little bit familiar with because, as Abigail said, we've seen that in privacy. There are other compliance areas where an impact assessment or a risk assessment is important. In my mind, it does follow some of what we saw in privacy, where your very high level, your 30,000-foot view, is we're talking about what the AI system is doing. We're going to point out some risk there, and then we're going to point out some controls for that risk. But to get into some of the specifics, let's talk about what's actually required here. The first is a statement describing the purpose, intended uses, and benefits of the AI system. You also need an analysis of whether it poses any known or reasonably foreseeable risks of algorithmic discrimination and steps that mitigate that risk. You need a description of the categories of data used as inputs and the outputs the system produces, any metrics used to evaluate the system's performance, and a description of transparency measures and post-deployment monitoring and guardrails. And finally, finally, finally, if the deployer is going to customize that AI system, an overview of the categories of data that were used in that customization. Abigail: Yeah. So I'm starting to see a lot of privacy themes here, especially with descriptions of data categories. But when do deployers have to do these impact assessments? Tyler: The shorter answer is it's ongoing, right? It's an ongoing obligation. They have to do it within 90 days of the deployment or modification of a high-risk AI system. And then after that, at least annually. So you have that short 90-day turnaround up front. And keep in mind that's deployment or modification of the system. And then you have the annual requirement. So this really is an ongoing thing that you're going to need to stay on top of and have your team stay on top of. Also worth noting that modifications to the system trigger an additional requirement in which deployers have to describe whether or not the system was used within the developer's intended use.
But that's kind of a tricky thing there, right? A little bit, you might have to step into the developer's shoes and think about, was this their intended use? Especially if the developer didn't provide really good documentation there, and that's not something you got during the process of signing up with them for the AI platform. Abigail: Yeah, and I think this highlights, again, how the intended use thing is going to play a big role in contracting. I think there's also a record retention requirement with these impact assessments, right? Tyler: Yeah, there is. I mean, 2025, I think, is going to be the year of record retention. Deployers have to retain all impact assessments, so all your past impact assessments, each time you conduct one for your annual review, for at least three years following the final deployment. So that's important to think about, too. I mean, something that we saw with privacy is, when there's an updated impact assessment, some of those old impact assessments would just be gone, be removed. Maybe they were a draft assessment that was never actually finalized. Now it kind of makes it clear that every time you have an impact assessment that satisfies one of these requirements, that hits the timeframe, we need to have an impact assessment, let's say, within that 90 days, that's an impact assessment that you have to save for a minimum of three years following that deployment. Also, if you recall, deploy has a really, really broad definition of just to use. So really, it's three years from the last time the system gets used. I think that can be incredibly tricky, and certainly it's a couple years down the road, but that can be an incredibly tricky thing, right? If you have an AI system that is kind of dormant, or maybe it's used once a year for a compliance function, something like that, every time it's touched, that's going to re-trigger that use definition, and then you will have deployed it again, and now you have to do another three-year period from that last deployment or use. Abigail: Wow, yeah. Sounds like you're going to have some serious admin controls on when you put a high-risk AI system to bed. I think, too, there's also some data privacy-esque requirements involved with these. Do you want to go over that really quick? Tyler: Sure, yeah. I mean, these are some of the transparency things that, again, like you said, Abigail, folks might be kind of used to doing some of these transparency-type requirements from the privacy side. The Colorado AI Act has these requirements, too. So first, notification, opt-out, and appeal. Remember, we're talking about AI systems that are helping to make or actually making consequential decisions. In that case, the law requires the deployer to notify the consumer of the nature and consequences of the decision before it's made. So before the AI system can actually make a decision or help make one, the consumer has to be notified. You have to tell the consumer how to contact the deployer. This might seem easy, but as we've seen with privacy, you might have a whole different set of contact information for something AI related than, like, your general customer service line, for example. If applicable, you have to tell the consumer how to opt out of their personal data being used to profile that consumer in a way that produces that legal effect. So that's similar to what we've seen in Colorado privacy law and other state comprehensive privacy laws in the United States.
And then finally, if a decision is adverse to the consumer, provide specific information on how that decision was reached, along with opportunities to correct any wrong data that produced the adverse decision and a way to appeal the decision. So that's a big deal there. I mean, providing that information on how that decision was reached, I think that requirement alone might be enough to cause some businesses to say, we don't want to go down this road. We don't want to provide it because we don't want to have to provide this information on why an adverse decision was reached. Abigail: Yeah, I would agree with that. And I want to reemphasize here that small businesses are not exempt from this part of the law, even if they're exempt from the other stuff. Tyler: Yeah, sadly, that's correct. And most importantly, deployers have to make sure this information gets to the consumer, which can be tricky, of course, right? Even if you're not interacting with the consumer directly, you've got to figure out a way that the consumer can actually get this information. And then it has to be in plain language and accessible. So I view this as another spot where a contract can come into play, because there are going to be maybe some real requirements here that you're not going to be able to handle. You might have to use that contract to make sure that you have the information that you need and to maybe obligate other parties to provide some of this information to the consumer, depending on your relationship. Abigail: Yeah, absolutely. Should we really quick talk about the notice provisions? Tyler: Yeah, I think that'd be great. Abigail: Okay, so real quick, both deployers and developers are going to have to have some sort of public-facing policy. I really think that this is going to become a commonplace thing, an AI policy, kind of like we have privacy policies now. And some themes for these policies are going to be describing your currently deployed systems, how you're managing known or reasonably foreseeable risk, and insights about the information collected and processed. And the other thing is that there's an accuracy requirement here. If you change any of those things on the back end, you need to update your AI policy within 90 days. Tyler: And I know we're kind of glossing over this a bit, but I feel like this is kind of a trap, right? We've seen this before where a business can get dinged because their privacy policy or something didn't accurately reflect their data practices. I think this is similar, right? Where maybe, arguably, they would have been better off by not saying anything. Abigail: Yeah, of course. I think this aspect kind of opens companies up to some FTC risk. They're going to have to stay on top of this, not only to comply with Colorado law, but also to avoid federal regulatory scrutiny in kind of the same unfair and deceptive trade practices arena. Tyler: Well, I know we're getting to the end here, but I think we've got to quickly talk about how much insight the act entitles the AG to. And then maybe, Abigail, just go on and talk about some of the attorney-client privilege weirdness that we have as well. Abigail: Yeah. So I think this is where the law gets really scary for companies: it enables the AG to ask developers for all of that information that they have to provide deployers, which we went over quickly earlier in the podcast.
And then the AG also gets to ask deployers for their risk management policy and program, their impact assessments, and all the records that support the impact assessments. Tyler: Yeah. And then do you want to talk about the weird no-waiver-of-attorney-client-privilege piece? I think that's really strange. Abigail: Yeah. So we've seen this with the privacy laws as well, because, if I'm remembering correctly, the AG gets to ask for those impact assessments as well. And it has this provision that says having to provide the impact assessment doesn't waive attorney-client privilege when you comply with it, which is, I think, interesting, because then the AG has now seen your information, but they're not allowed to use it against you. I don't know how that's going to work in terms of enforcement. Tyler: Yeah, that's pretty strange, right? And there is a 90-day deadline for responding to these requests, so it's kind of a short deadline. Abigail: Yeah. And then finally, the last kind of, like I said, scary AG notification requirement that I really want to point out is the mandatory reporting requirement: if a developer or deployer discovers that a high-risk AI system has caused algorithmic discrimination, then they have to alert the AG. There is an affirmative defense if they rectify the issue, but you only get the affirmative defense if you have those NIST-like frameworks in place. And also, I want to point out, too, that the law does not require deployers to tell their developers that they found algorithmic discrimination. They just have to tell the AG. So I think this is another issue if you're on the developer side: in contracting, you need to insist that your deployers are also alerting you to this kind of issue. Tyler: Yeah, right. Otherwise, you know, your deployer is going to tell the AG that, hey, that developer's product is discriminating. You might never even know that it happened. You'll have an investigation maybe pending against you, and you had no idea that it was even coming. Abigail: Exactly. So as we wrap up here, Tyler, I want to reemphasize, I think you talked about this in the beginning: if this goes into effect in a year, why are we talking about it today? Tyler: Look, I think, you know, from this conversation, it's obvious, right? There is a lot here. This is going to be a big project for any business that it covers. I think there are also even threshold projects of determining, hey, is this something that is going to apply to us? And that's going to be big as well. You know, as I've seen in my years doing data privacy, there's probably going to be a little bit of an initial lag in enforcement. So I don't expect, hey, once we hit February 2026, a bunch of enforcement actions. But we could be wrong. And those enforcement actions, when they do come, are going to come seemingly out of nowhere. And they're going to be backwards-looking to the effective date of the law. So you really don't want to be caught off guard here. There's a lot to do. Abigail: Yeah, I think that's especially true considering how much documentation is involved. I feel like this law really implicates a lot of business planning and decision making. So you kind of need to have the business side of your team really thinking about whether these systems are worth it in the end. Tyler: Yeah.
When you think about the compliance costs, the amount of oversight, you just have to be honest with yourself, I think, if you're a business, as to whether implementing a high-risk AI system is really worth it for you, at least in Colorado. And I think we're going to see it in other states as well. I think this is especially true for the business that just barely misses that deployer exception. And if you just barely miss that deployer exception, that can be tough, right? Because you might have the bigger compliance obligations. And so that's something you have to think about if you're in that gray area, or maybe some of the other gray areas in this law: think about whether it's really worth it. Well, we've covered a lot here. I think the bottom line is the risk here is real. There are real action items. There are real things you need to do. Please reach out to us. Abigail and I, as you can probably tell, love nerding out about this subject. We'd be happy to talk to you at a high level and just help you brainstorm whether it applies to you and your company or not. Again, thanks so much for joining. Really appreciate your time and hope to see you on the next one. Abigail: Yeah, thank you, everyone. And thank you, Tyler. Tyler: Yeah, thanks, Abigail. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.
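As an editorial aside to the retention discussion in this episode: the short sketch below (Python, with entirely hypothetical names; the Colorado AI Act does not prescribe any particular tooling) only illustrates the arithmetic the speakers describe, where every use of a high-risk system counts as a new deployment and pushes the three-year retention deadline out again.

from datetime import date
from typing import Optional

RETENTION_YEARS = 3  # assumption for illustration: retain impact assessments at least 3 years after the final deployment

def add_years(d: date, years: int) -> date:
    # Add whole years to a date, falling back to Feb 28 when the result would land on a non-existent Feb 29.
    try:
        return d.replace(year=d.year + years)
    except ValueError:
        return d.replace(year=d.year + years, day=28)

class HighRiskSystemLog:
    # Tracks each use ("deployment") of a high-risk AI system and the resulting retention deadline.
    def __init__(self) -> None:
        self.uses: list = []

    def record_use(self, when: date) -> None:
        # Under the act's broad definition, every use counts as a deployment,
        # so each recorded use pushes the retention deadline out again.
        self.uses.append(when)

    def retention_deadline(self) -> Optional[date]:
        if not self.uses:
            return None
        return add_years(max(self.uses), RETENTION_YEARS)

# Example: a system touched once a year for a compliance task keeps extending the deadline.
log = HighRiskSystemLog()
log.record_use(date(2026, 3, 1))
log.record_use(date(2027, 3, 1))
print(log.retention_deadline())  # 2030-03-01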
1 Navigating the Digital Operational Resilience Act 15:17
Catherine Castaldo, Christian Leuthner and Asélle Ibraimova break down DORA, the Digital Operational Resilience Act, which is new legislation that aims to enhance the cybersecurity and resilience of the financial sector in the European Union. DORA sets out common standards and requirements for these entities so they can identify, prevent, mitigate and respond to cyber threats and incidents as well as ensure business continuity and operational resilience. The team discusses the implications of DORA and offers insights on applicability, obligations and potential liability for noncompliance. This episode was recorded on 17 January 2025. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Catherine: Hi, everyone. I'm Catherine Castaldo, a partner in the New York office of Reed Smith, and I'm in the EmTech Group. And I'm here today with my colleagues, Christian and Asélle, who will introduce themselves. And we're going to talk to you about DORA. Go ahead, Christian. Christian: Hi, I'm Christian Leuthner. I'm a Reed Smith partner in the Frankfurt office, focusing on IT and data protection law. Asélle: And I'm Asélle Ibraimova. I am a counsel based in London. And I'm also part of the EmTech group, focusing on tech, data, and cybersecurity. Catherine: Great. Thanks, Asélle and Christian. Today, when we're recording this, January 17th, 2025, is the effective date of this new regulation, commonly referred to as DORA. For those less familiar, would you tell us what DORA stands for and who is subject to it? Christian: Yeah, sure. So DORA stands for the Digital Operational Resilience Act, which is a new regulation that aims to enhance the cybersecurity and resilience of the financial sector in the European Union. It applies to a wide range of financial entities, such as banks, insurance companies, investment firms, payment service providers, crypto asset service providers, and even to critical third-party providers that offer services to the financial sector. DORA sets out common standards and requirements for these entities to identify, prevent, mitigate, and respond to cyber threats and incidents, as well as to ensure business continuity and operational resilience. Catherine: Oh, that's comprehensive. Is there any entity who needs to be more concerned about it than others, or is it equally applicable to all of the ones you listed? Asélle: I can jump in here. So DORA is a piece of legislation that wants to respect proportionality and allow organizations to deal with DORA requirements in a way that is proportionate to their size and to the nature of the cybersecurity risks. So, for example, micro-enterprises or certain financial entities that have only a small number of members will have a simplified ICT risk management framework under DORA. I also wanted to mention that DORA applies to financial entities that are outside of the EU but provide services in the EU, so they will be caught. And maybe just to also add, in terms of the risks, it's not only the size of the financial entities that matters in terms of how they comply with the requirements of DORA, but also the cybersecurity risk.
So let's say an ICT third-party service provider: the risk of that entity will depend on the nature of that service, on its complexity, on whether that service supports a critical or important function of the financial entity, generally on the dependence on the ICT service provider, and ultimately on its potential to disrupt the services of that financial entity. Catherine: So some of our friends might just be learning about this by listening to the podcast. So what does ICT stand for, Asélle? Asélle: It stands for information and communication technology. So in other words, it's anything that a financial entity receives as a service or a product digitally. It also covers ICT services provided by a financial entity. So, for example, if a financial entity offers a platform for fund or investment management, or a piece of software, or its custodian services are provided digitally, those services will also be considered an ICT service. And those financial entities will need to cover their customer-facing contracts as well and make sure DORA requirements are covered in the contracts. Catherine: Thank you for that. What are some of the risks for noncompliance? Christian: The risks for noncompliance with DORA are significant and could entail both financial and reputational consequences. First of all, DORA empowers the authorities to impose administrative sanctions and corrective measures on entities that breach its provisions, which could range from warnings and reprimands to fines and penalties to withdrawals of authorization and licenses, which could have a significant impact on the business of all the entities. The level of sanctions and measures will depend on the nature, gravity and duration of the breach, as well as on the entity's cooperation and remediation efforts. So it's better to be cooperative and help the authority in case they identify a breach. Second, non-compliance with DORA could also expose entities to legal actions and claims from customers, investors, or other parties that might suffer losses or damages as a result of a cyber incident or disruption of service. And third, non-compliance with DORA could also damage the entity's reputation and trustworthiness in the market and affect its competitive advantage and customer loyalty. Therefore, entities should take DORA seriously and ensure that they comply with its requirements and expectations. Catherine: If I haven't been able to start considering DORA, and I think it might be applicable to me, where should I start? Asélle: It's actually a very interesting question. So from our experience, we see large financial entities, such as banks, look at this comprehensively. Obviously, all financial entities had quite a long time to prepare, but large organizations seem to look at it more comprehensively and have done the proper assessment of whether or not their services are caught. But we are still getting quite a few questions in terms of whether or not DORA applies to a certain financial entity type. So I think there are quite a few organizations out there who are still trying to determine that. But once that's clear, although DORA itself is quite a long piece of legislation, in actual fact it is further clarified in various regulatory technical standards and implementing technical standards, and they clarify all of the cybersecurity requirements that actually appear quite generic in DORA itself. So those RTS and ITS are quite lengthy documents and are altogether around 1,000 pages.
So that's where the devil is in the detail, and organizations may find it quite overwhelming. So I would start by assessing whether DORA applies: which services, which entities, which geographies. Once that's determined, it's important to identify whether the financial entity's own services may be deemed ICT services, as I just explained earlier. The next step in my mind would be to check whether the services that are caught also support critical or important functions, and, when making registers of third-party ICT service providers, to identify those separately. And the reason is that quite a few additional requirements apply to critical and important functions, for example, the incident reporting obligations and the requirements in terms of contractual agreements. And then I would look at updating contracts, first of all with important ICT service providers, then also checking if customer-facing contracts need to be updated if the financial entity is providing ICT services itself. And also not forgetting the intra-group ICT agreements where, for example, a parent company is providing data storage or word processing services to its affiliates in Europe. So they should be covered as well. Catherine: If we were a smaller company or a company that interacts in the financial services sector, can we think of an example that might be helpful for people listening on how I could start? Maybe what's an example of a smaller or middle-sized company that would be subject to this? And then who would they be interacting with on the ICT side? Asélle: Maybe an example of that could be an investment fund or a pensions provider. I think most of this compliance effort when it comes to DORA will be driven by in-house cybersecurity teams. So they will be updating their risk management and risk frameworks. But any updates to policies, whenever they have to be looked at, I think will need to be reviewed by legal, and incident reporting policies and contract management policies, I don't think, depend on size. If there are ICT service providers supporting critical or important functions, additional requirements will apply regardless of whether you're a small or a large organization. It's just that the measures will depend on what level of risk, say, a certain ICT service provider presents. So if this internal cybersecurity team has put, you know, all the ICT assets in buckets and all the third-party ICT services in various buckets based on criticality, then that would make the job of legal and compliance generally much easier. However, what we're seeing right now is that all of that work is happening at the same time, in parallel, as people are rushing to get compliant. So that will mean that there may be gaps and inconsistencies, and I'm sure they can be patched later. Catherine: Thank you for that. So just another follow-up question, maybe Christian can respond: would my data center contract be subject to DORA regulations if I were a financial services entity? Christian: It's worth looking into that and seeing if it's an ICT provider that you use to provide your services. So I'm pretty sure you need to look into that and see if you can implement at least the contractual requirements that arise from DORA. Asélle: I would just add, to support Christian's response, that the definition of ICT services is quite broad and covers digital and data services provided through ICT systems.
So, I mean, as you can see, it's just so generic that I'm pretty sure it would cover data centers, but I guess not directly, because if, say, a financial entity was receiving a service from a cloud service provider, then data centers are probably a second- or third-level subcontractor. And unfortunately, or fortunately, DORA has very detailed requirements in terms of subcontracting, and the obligations don't stop at a certain level. Therefore, data centers are likely to be caught somehow and will be receiving DORA addenda to their contracts. Catherine: Thank you for that clarification. I was, like probably many people who have tried to digest this regulation, a little confused about how broad the coverage for information and communication technology went. But that's very helpful, I'm sure. Any final thoughts? Asélle: We are helping a few organizations and learning a lot as we work with them. The legislation is pretty complex and requires in-house teams to work together as well. And Christian and I would be very happy to assist in navigating this complex framework. Christian: And if you haven't started yet, of course, it's a huge regulation. There are so many requirements to tackle, but one day you have to start. So start today, look into it, and implement the requirements that arise from DORA. Catherine: Well, thank you so much, Christian and Asélle. And everybody, as we said before, we're talking about DORA today because today, January 17th, is the day that it becomes effective. So if, like Christian said, you haven't started, today's a good day to start. And I'm sure you can reach out to one of my colleagues to get some assistance. Thanks for joining. Christian: Thanks for having us, Catherine. Asélle: It was a pleasure. Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.
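As an illustration of the triage described in this episode, here is a minimal sketch (Python; the class names, fields and categories are our own assumptions, not DORA terminology) of a simple register that flags which ICT services supporting critical or important functions should be prioritised for DORA contract addenda.

from dataclasses import dataclass, field
from typing import List

@dataclass
class IctService:
    provider: str
    description: str
    supports_critical_function: bool  # supports a critical or important function of the financial entity
    intra_group: bool = False         # e.g. a parent company providing data storage to EU affiliates

@dataclass
class DoraRegister:
    services: List[IctService] = field(default_factory=list)

    def add(self, service: IctService) -> None:
        self.services.append(service)

    def priority_for_contract_updates(self) -> List[IctService]:
        # Services supporting critical or important functions attract the additional
        # contractual and incident-reporting requirements, so update those contracts first.
        return [s for s in self.services if s.supports_critical_function]

register = DoraRegister()
register.add(IctService("CloudCo", "hosting for the payments platform", True))
register.add(IctService("ParentCo", "intra-group data storage", False, intra_group=True))
for s in register.priority_for_contract_updates():
    print(s.provider, "- prioritise a DORA addendum")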
1 EU/Germany: Damages after data breach/scraping – Groundbreaking case law 20:15
In its first leading judgment (decision of November 18, 2024, docket no. VI ZR 10/24), the German Federal Court of Justice (BGH) dealt with claims for non-material damages pursuant to Art. 82 GDPR following a scraping incident. According to the BGH, a proven loss of control or well-founded fear of misuse of the scraped data by third parties is sufficient to establish non-material damage. The BGH therefore bases its interpretation of the concept of damages on the case law of the CJEU, but does not provide a clear definition and leaves many questions unanswered. Our German data litigation lawyers, Andy Splittgerber, Hannah von Wickede and Johannes Berchtold, discuss this judgment and offer insights for organizations and platforms on what to expect in the future. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Andy: Hello, everyone, and welcome to today's episode of our Reed Smith Tech Law Talks podcast. In today's episode, we'll discuss the recent decision of the German Federal Court of Justice, the FCJ, of November 18, 2024, on compensation payments following a data breach or data scraping. My name is Andy Splittgerber. I'm a partner in Reed Smith's Munich office in the Emerging Technologies Department. And I'm here today with Hannah von Wickede from our Frankfurt office. Hannah is also a specialist in data protection and data litigation. And Johannes Berchtold, also from Reed Smith in the Munich office, also from the emerging technologies team and a tech litigator. Thanks for taking the time and diving a bit into this breathtaking case law. Just to catch everyone up and bring everyone up to the same speed: it was a case decided by Germany's highest civil court, in an action brought by a user of a social platform who wanted damages after his personal data was scraped by a hacker from that social media network. That was done by using telephone numbers, trying out all kinds of numbers, probably through a technical fault, and the find-a-friend function. In this way, the hackers could download a couple of million data sets from users of that platform, which could then be found on the dark web. The user then started an action before the civil court claiming damages. And this case was then referred to the highest court in Germany because of the legal difficulties. Hannah, do you want to briefly summarize the main legal findings and outcomes of this decision? Hannah: Yes, Andy. So, the FCJ made three important statements, basically. First of all, the FCJ provided its own definition of what a non-material damage under Article 82 GDPR is. They are saying that mere loss of control can constitute a non-material damage under Article 82 GDPR. And if such a loss by the plaintiff is not verifiable, a justified fear of personal data being misused can also constitute a non-material damage under the GDPR. So both are pretty much in line with what the ECJ has already said about non-material damages in the past. And besides that, the FCJ also makes a statement regarding the amount of compensation for non-material damages following from a scraping incident.
And this is quite interesting because, according to the FCJ, the amount of the claim for damages in such cases is around 100 euros. That is not much money. However, the FCJ also says both loss of control and reasonable apprehension, including the negative consequences, must first be proven by the plaintiff. Andy: So we have an immaterial damage, that's important for everyone to know. And the legal basis for the damage claim is Article 82 of the General Data Protection Regulation. So it's not German law, it's European law. And as you mentioned, Hannah, there was some ECJ case law in the past on similar cases. Johannes, can you give us a brief summary of what these rulings were about? And in your view, does the FCJ bring new aspects to these cases, or is it very much in line with what the European Court of Justice said already? Johannes: Yes, the FCJ has quoted the ECJ quite broadly here. So there was a little clarification in this regard. So far, it's been unclear whether the loss of control itself constitutes the damage or whether the loss of control is a mere negative consequence that may constitute non-material damage. Now the Federal Court of Justice ruled that the mere loss of control constitutes the direct damage. So there's no need for any particular fear or anxiety to be present for a claim to exist. Andy: Okay. So we read a bit in the press after the decision: yes, it's a very new and interesting judgment, but it's not revolutionary. It stays very close to what the European Court of Justice said already. The loss of control, I still struggle with. I mean, even if it's an immaterial damage, it's a bit difficult to grasp. And I would have hoped the FCJ would provide some more clarity or guidance on what they mean, because this is the central aspect, the loss of control. Johannes, do you have some more details? What does the court say, or how can we interpret that? Johannes: Yeah, Andy, I totally agree. So in the future, discussion will most likely tend to focus on what actually constitutes a loss of control. The FCJ does not provide any guidance here. However, it can already be said the plaintiff must have had control over his data to actually lose it. So whether this is the case is particularly questionable if the actual scraped data was public, like in a lot of the cases we have in Germany right here, and/or if the data was already included in other leaks, or the plaintiff published the data on another platform, maybe on his website or another social network where the data was freely accessible. So in the end, it will probably depend on the individual case whether there was actually a loss of control or not. And we'll just have to wait for more judgments in Germany or in Europe to define loss of control in more detail. Andy: Yeah, I think that's also a very important aspect of this case that was decided here, that the major cornerstones of the claim were established, they were proven. So it was undisputed that the claimant was a user of the network. It was undisputed that the scraping took place. It was undisputed that the user's data was affected as part of the scraping. And then also the user's data was found on the dark web. So in this case, when I say undisputed, it means that the parties did not dispute it and the court could base their legal reasoning on these facts. In a lot of cases that we see in practice, these cornerstones are not established. They're very often disputed. Often you perhaps don't even know that the claimant is a user of that network.
There's always, or often, dispute around whether or not a scraping or a data breach took place. It's also not always the case that data is found on the dark web. Even if the finding on the dark web, for example, is not a written criterion of the loss of control, I think it definitely is an aspect for the courts to say, yes, there was loss of control, because we see that the data was uncontrolled on the dark web. And that's a point, I don't know if any of you have views on this, also from the technical side. I mean, how easy and how often do we see that, you know, there is like a tag that says, okay, the data on the dark web is from this social platform? Often, users are affected by multiple data breaches or scrapings, and then it's not possible to make this causal link between one specific scraping or data breach and the data being found somewhere on the web. Do you think, Hannah or Johannes, that this could be an important aspect in the future when courts determine the loss of control, that they also look into, you know, whether there actually was a loss of control? Hannah: I would say yes, because it was already mentioned that the plaintiffs must first prove that there is a causal damage. And a lot of the plaintiffs are using various databases that list such alleged data breaches, and the plaintiffs always claim that this would indicate such a causal link. And of course, this is now a decisive point the courts have to handle, as it is a requirement. Before you get to the damage and before you can decide if there was a damage, if there was a loss of control, you have to prove that the plaintiff was even affected. And yeah, that's a challenge and not easy in practice, because there's also a lot of case law already on those databases saying that there might not be sufficient proof of the plaintiffs being affected by alleged data breaches or leaks. Andy: All right. So let's see what's happening in other countries as well. I mean, Article 82, as I said in the beginning, is a European piece of law. So other countries in Europe will have to deal with the same topics. We cannot come up with our German requirements or interpretation of immaterial damages, which are rather narrow, I would say. So Hannah, any other indications you see from the European angle that we need to have in mind? Hannah: Yes, you're right. And first, it is important that this concept of immaterial damage is in accordance with EU law, as this is the GDPR. And as Johannes said, the ECJ has always interpreted this damage very broadly and does not consider a threshold to be necessary. And I agree with you that it is difficult to set such low requirements for the concept of damage and at the same time not demand materiality or a threshold. In my opinion, the Federal Court of Justice should perhaps have made a referral here to the ECJ after all, because it is not clear what loss of control is. And without a materiality threshold, this contributes a lot to legal uncertainty for a lot of companies. Andy: Yeah. Thank you very much, Hannah. So yes, the first takeaway for us definitely is loss of control. That's a major aspect of the decision. Are there other aspects, other interesting sentences or thoughts we see in the FCJ decision? One aspect I saw is right at the beginning, where the FCJ merges together two events: the scraping and then a noncompliance with data access requests.
And that was based, in that case, on contract, but similarly on Article 15 GDPR. So those events are kind of merged together as one event, which in my view doesn't make so much sense, because they're separate in terms of the events, the dates, the actions or non-actions, and also then the damages from a non-compliance with Article 15. I think it's much more difficult to argue a damage of loss of control there than with a scraping or a data breach. That's not a major aspect of the decision, but I think it was an interesting finding. Any other aspects, Hannah or Johannes, that you saw in the decision worth mentioning here for our audience? Johannes: Yeah, so the discussion in Germany was really broad, so I think just maybe two points have been neglected in the discussion so far. First, towards the end of the reasoning, the court stated that data controllers are not obliged to provide information about unknown recipients. For example, in scraping cases, controllers often do not know who the scrapers are. So there's no obligation for them to provide any names of scrapers they don't know. That clarification is really helpful in possible litigation. And on the other hand, it's somewhat lost in the discussion that the damages of 100 euros only come into consideration if the phone number, the user ID, the first name, the last name, the gender, and the workplace are actually affected. So accordingly, if less data, maybe just an email address or a name, or less sensitive data was scraped, the claim for damages can or must even be significantly lower. Andy: All right. Thanks, Johannes. That's very interesting. So, not only the loss of control aspect, but also other aspects in this decision are worth mentioning and reading if you have the time. Now looking a bit into the future, what's happening next, Johannes? What are your thoughts? I mean, you're involved in some similar litigation as well, as is Hannah. What do you expect? What's happening to those litigation cases in the future? Any changes? Will we still have law firms suing social platforms, or suing for consumers against social platforms? Or do we expect any changes in that? Johannes: Yeah, Andy, it's really interesting. In this mass GDPR litigation, you always have to consider the business side, not just the legal side. So I think the ruling will likely put an end to the mass GDPR litigation as we know it from the past. Because so far, the plaintiffs have mostly appeared only with a legal expenses insurer. So damages of up to like 5,000 euros and other claims have been asserted, so the value in dispute could be pushed to the edge. It was maybe around 20,000 euros in the end. But now it's clear that the potential damages in such scraping scenarios are more likely to be around 100 euros or even less. So as a result, the legal expenses insurers will no longer fund claims for 5,000 euros. But at the same time, the vast majority of legal expenses insurers have agreed a deductible of more than 100 euros. So the potential outcome and the risk of litigation are therefore disproportionate. And as a result, the plaintiffs will probably refrain from filing such lawsuits in the future. Andy: All right. So good news for all insurers in the audience, or better: watch out for requests for coverage of litigation and check whether the values in dispute are much too high.
So we will probably see fewer insurance coverage cases, but still, definitely, we expect the same amount or perhaps even more litigation, because the number as such, even if it's only 100 euros, certainly seems attractive for users as so-called low-hanging fruit. And Hannah, before we close our podcast today, again looking into the future, what are your recommendations or takeaways for platforms, internet sites, basically everyone; any organization handling data can be affected by data scraping or a data breach. So what are your recommendations or first thoughts? How can those organizations get ready or, ideally, even avoid such litigation? Hannah: So first, Andy, it is very important to clarify that the FCJ judgment was ruled on a specific case in which non-public data was made available to the public as a result of a proven breach of data protection. And that is not the case in general. So you should avoid simply applying this decision to every other case like a template, because if other requirements following from the GDPR are missing, the claims will still be unsuccessful. And second, of course, platform companies have to consider what they publish about their security vulnerabilities and take the best possible precautions to ensure that data is not published on the dark web. And if necessary, companies can transfer the risk of publication to the user simply by adjusting their general terms and conditions. Andy: Thanks, Hannah. These are interesting aspects, and I see a little bit of conflict between the breach notification obligations under Articles 33 and 34 and the direction this case law goes. That will also be very interesting to see. Thank you very much, Hannah and Johannes, for your contribution. That was a really interesting, great discussion. And thank you very much to our audience for listening in. This was today's episode of our EU Reed Smith Tech Law Talks podcast. We thank you very much for listening. Please leave feedback and comments in the comments fields or send us an email. We hope to welcome you soon to our next episode. Have a nice day. Thank you very much. Bye bye. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcast on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.
1 AI explained: AI in the UK insurance market 13:12
Laura-May Scott and Emily McMahan navigate the intricate relationship between AI and professional liability insurance, offering valuable insights and practical advice for businesses in the AI era. Our hosts, both lawyers in Reed Smith's Insurance Recovery Group in London, delve into AI's transformative impact on the UK insurance market, focusing on professional liability insurance. AI is adding efficiency to tasks such as document review, legal research and due diligence, but who pays when AI fails? Laura-May and Emily share recommendations for businesses on integrating AI, including evaluating specific AI risks, maintaining human oversight and ensuring transparency. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Laura-May: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in the UK insurance market. I'm Laura-May Scott, a partner in our Insurance Recovery and Global Commercial Disputes group based here in our London office. Joining me today is Emily McMahan, a senior associate also in the Insurance Recovery and Global Commercial Disputes team from our London office. So diving right in, AI is transforming how we work and introducing new complexities in the provision of services. AI is undeniably reshaping professional services, and with that, the landscape of risk and liability. Specifically today, we're going to discuss how professional liability insurance is evolving to address AI-related risks, and what companies should be aware of as they incorporate AI into their operations and work product. Emily, can you start by giving our listeners a quick overview of professional liability insurance and how it intersects with this new AI-driven landscape? Emily: Thank you, Laura-May. So, professional liability insurance protects professionals, including solicitors, doctors, accountants, and consultants, for example, against claims brought by their clients in respect of alleged negligence or poor advice. This type of insurance helps professionals cover the legal costs of defending those claims, as well as any related damages or settlements associated with the claim. Before AI, professional liability insurance would protect professionals from traditional risks, like errors in judgment or omissions from advice. For example, if an accountant missed a filing deadline, or a solicitor failed to supervise a junior lawyer such that the firm provided incorrect advice on the law. However, as AI becomes increasingly utilized in professional services and in the delivery of services and advice to clients, the traditional risks faced by these professionals are changing rapidly. This is because AI can significantly alter how services are delivered to clients. Indeed, it is also often the case that it is not readily apparent to the client that AI has been used in the delivery of some of these professional services. Laura-May: Thank you, Emily. I totally agree with that. Can you now please tell us how the landscape is changing?
So how is AI being used in the various sectors to deliver services to clients? Emily: Well, in the legal sphere, AI is being used for tasks such as document review, legal research, and within the due diligence process. At first glance, this is quite impressive, as these are normally the most time-consuming aspects of a lawyer's work. So the fact that AI can assist with these tasks is really useful. Therefore, when it works well, it works really well and can save us a lot of time and costs. However, when the use of AI goes wrong, it can cause real damage. For example, if it transpires that something has been missed in the due diligence process, or if the technology hallucinates or makes up results, then this can cause a significant problem. I know, for example, on the latter point in the US, there was a case where two New York lawyers were taken to court after using ChatGPT to write a legal brief that actually contained fake case citations. Furthermore, using AI poses a risk in the context of confidentiality, where personal data of clients is disclosed to the system or there's a data leak. So when it goes wrong, it can go really wrong. Laura-May: Yes, I can totally understand that. So basically, it all boils down to the question of who is responsible if AI gets something wrong? And I guess, will professional liability insurance be able to cover that? Emily: Yes, exactly. Does liability fall to the professionals who have been using the AI or to the developers and providers of the AI? There's no clear-cut answer, but the client will probably no doubt look to the professional with whom they've contracted and who owes them a duty of care, whether that be, for example, a law firm or an accountancy firm, to cover any subsequent loss. In light of this, Laura-May, maybe you could tell our listeners what this means from an insurance perspective. Laura-May: Yes, it's an important question. So since many insurance policies were created before AI, they don't explicitly address AI-related issues. For now, claims arising from AI are often managed on a case-by-case basis within the scope of existing policies, and it very much depends on the policy wording. For example, as UK law firms must obtain sufficient professional liability insurance to adequately cover their current and past services, as mandated by their regulator, the Solicitors Regulation Authority, it is likely that such a policy will respond to claims where AI is used to perform and deliver services to clients and where a later claim for breach of duty arises in relation to that use of AI. Thus, a law firm's professional liability insurance could cover instances where AI is used to perform legal duties, giving rise to a claim from the client. And I think that's pretty similar for accountancy firms who are members of the Institute of Chartered Accountants in England and Wales. So the risks associated with AI are likely to fall under the minimum terms and conditions for their required professional liability insurance, such that any claims brought against accountants for breach of duty in relation to the use of AI would be covered under the insurance policy. However, as time goes on, we can expect to see more specific terms addressing the use of AI in professional liability policies. Some policies might have that already, but I think as we go through the market, it will become more industry standard. And we recommend that businesses review their professional liability policy language to ascertain how it addresses AI risk.
Emily: Thanks, Laura-May. That's really interesting that such a broad approach is being followed. I was wondering whether you would be able to tell our listeners how you think they should be reacting to this approach and preparing for any future developments. Laura-May: I would say the first step is that businesses should evaluate how AI is being integrated into their services. It starts with understanding the specific risks associated with the AI technologies that they are using and thinking through the possible consequences if something goes wrong with the AI product that's being utilised. The second thing concerns communication. So even if businesses are not coming across specific questions regarding the use of AI when they're renewing or placing professional liability cover, companies should always ensure that they're proactively updating their insurers about the tech that they are using to deliver their services. And that's to ensure that businesses discharge their obligation to give a fair presentation of the risk to insurers at the time of placement or on variation or renewal of the policy pursuant to the Insurance Act 2015. It's also practically important to disclose to insurers fully so that they understand how the business utilizes AI, and you can then avoid coverage-related issues down the line if a claim does arise. Better to have that all dealt with up front. The third step is about human involvement and maintaining robust risk management processes for the use of AI. Businesses need to ensure that there is some human supervision of any tasks involving AI and that all of the output from the AI is thoroughly checked. So businesses should be adopting internal policies and frameworks to outline the permitted use of AI in the delivery of services by their business. And finally, I think it's very important to focus on transparency with clients. You know, clients should be informed if any AI tech has been used in the delivery of services. And indeed, some clients may say that they don't want the professional services provider to utilize AI in the delivery of services. And businesses must be familiar with any restrictions that have been put in place by a client. So in other words, informed consent for the use of AI should be obtained from the client where possible. I think these steps should collectively help all parties begin to understand where the liability lies, Emily. Do you have anything to add? Emily: I see. So it's basically all about taking a proactive rather than a reactive attitude to this. Though times may be uncertain, companies should certainly be preparing for what is to come. In terms of anything to add, I would also just like to quickly mention that if a firm uses a third-party AI tool instead of its own tool, risk management can become a little more complex. This is because if a firm develops its own AI tool, it knows how it works and therefore the risks that could manifest from it. This makes it easier to perform internal checks and also obtain proper informed consent from clients, as they'll have more information about the technology that is being utilized. Whereas if a business uses a third-party technology, although in some cases this might be cheaper, it is harder to know the associated risk. And I would also say that jurisdiction comes into this. It's important that any global professional services business looks at the legal and regulatory landscape in all the countries in which they operate.
There is not a globally uniform approach to AI, and how to utilize it and regulate it is changing. So, companies need to be aware of where their outputs are being sent and ensure that their risks are managed appropriately. Laura-May: Yes, I agree. All great points, Emily. So in the future, what do you think we can be expecting from insurers? Emily: So I know you mentioned earlier that as time progresses, we can expect to see more precise policies. At the moment, I think it is fair to say that insurers are interested in understanding how AI is being used in businesses. It's likely that as time goes on and insurers begin to understand the risks involved, they will start to modify their policies and ask additional questions of their clients to better tailor their covered risks. For example, certain insurers may require that insureds declare their AI usage and provide an overview of the possible consequences if that technology fails. Another development we can expect from the market is new insurance products created solely for the use of AI. Laura-May: Yes, I entirely agree. I think we will see more specific insurance products that are tailored to the use of AI. So in summary, businesses should focus on their risk management practices regarding the use of AI and ensure that they're having discussions with their insurers about the use of the new technologies. These conversations around responsibility, transparency and collaboration will undoubtedly continue to shape professional liability insurance in the AI era that we're now in. And indeed, by understanding their own AI systems, engaging with insurers and setting clear expectations with clients, companies can stay ahead. Anything more there, Emily? Emily: Agreed. It's all about maintaining that balance between innovation and responsibility. While AI holds tremendous potential, it also demands accountability from the professionals who use it. So all that's left to say is thank you for listening to this episode and look out for the next one. And if you enjoyed this episode, please subscribe, rate, and review us on your favorite podcast platform and share your thoughts and feedback with us on our social media channels. Laura-May: Yes, thanks so much. Until next time. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.
Our latest podcast covers the legal and practical implications of AI-enhanced cyberattacks; the EU AI Act and other relevant regulations; and the best practices for designing, managing and responding to AI-related cyber risks. Partner Christian Leuthner in Frankfurt and partner Cynthia O'Donoghue in London, with counsel Asélle Ibraimova, share their insights and experience from advising clients across various sectors and jurisdictions. Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Christian: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI and cybersecurity threats. My name is Christian Leuthner. I'm a partner at the Reed Smith Frankfurt office, and I'm with my colleagues Cynthia O'Donoghue and Asélle Ibraimova from the London office. Cynthia: Morning, Christian. Thanks. Asélle: Hi, Christian. Hi, Cynthia. Happy to be on this podcast with you. Christian: Great. In late April 2024, the German Federal Office for Information Security identified that AI, and in particular generative AI and large language models (LLMs), is significantly lowering the barriers to entry for cyber attacks. The technology, so AI, enhances the scope, speed, and impact of cyber attacks and malicious activities, because it simplifies social engineering, and it really makes the creation or generation of malicious code faster, simpler, and accessible to almost everybody. The EU legislator had some attacks in mind when creating the AI Act. Cynthia, can you tell us a bit about what the EU regulator particularly saw as a threat? Cynthia: Sure, Christian. I'm going to start by saying there's a certain irony in the EU AI Act, which is that there's very little about the threat of AI, even though sprinkled throughout the EU AI Act is lots of discussion around security and keeping AI systems safe, particularly high-risk systems. But the EU AI Act contains a particular article that's focused on the design of high-risk systems and cybersecurity. And the main concern is really around the potential for data poisoning and for model poisoning. And so part of the principle behind the EU AI Act is security by design. And so the idea is that the EU AI Act regulates high-risk AI systems such that they need to be designed and developed in a way that ensures an appropriate level of accuracy, robustness, and cybersecurity, and to prevent such things as data poisoning and model poisoning. And it also talks about the horizontal laws across the EU. So because the EU AI Act treats AI as a product, it brings into play other EU directives, like the Directive on the Resilience of Critical Entities and the newest cybersecurity regulation in relation to digital products. And I think when we think about AI, you know, most of our clients are concerned about the use of AI systems and, let's say, ensuring that they're secure. But really, you know, based on that German study you mentioned at the beginning of the podcast, I think there's less attention paid to the use of AI as a threat vector for cybersecurity attacks.
So, Christian, what do you think is the relationship between the AI Act and the Cyber Resilience Act, for instance? Christian: Yeah, I think you mentioned it already. So the legislator saw there is a link, and high-risk AI models need to implement a lot of security measures. And the latest Cyber Resilience Act requires some stakeholders in software and hardware products to also implement security measures and imposes a number of different obligations on them. To not over-engineer these requirements, the AI Act already takes into account that if a high-risk AI model is in scope of the Cyber Resilience Act, the providers of those AI models can refer to the implementation of the cybersecurity requirements they made under the Cyber Resilience Act. So they don't need to double their efforts. They can just rely on what they have implemented. But it would be great if we were not only applying the law, but also had some guidance from public bodies or authorities on that. Asélle, do you have something in mind that might help us with implementing those requirements? Asélle: Yeah, so ENISA has been working on AI and cybersecurity in general, and it produced a paper called Multi-Layer Framework for Good Cybersecurity Practices for AI last year. So it still needs to be updated. However, it does provide a very good summary of various AI initiatives throughout the world. It generally mentions that when thinking of AI, organizations need to take into consideration the general system vulnerabilities, the vulnerabilities in the underlying ICT infrastructure. And also, when it comes to the use of AI models or systems, the various threats that you already talked about, such as data poisoning and model poisoning and other kinds of adversarial attacks on those systems, should also be taken into account. In terms of specific guidelines or standards, ENISA mentions ISO/IEC 42001, the AI management system standard. Another noteworthy set of guidelines mentioned is the NIST AI Risk Management Framework, obviously US guidance. And obviously both of these are to be used on a voluntary basis. But basically, their aim is to ensure developers create trustworthy AI: valid, reliable, safe, secure, and resilient. Christian: Okay, that's very helpful. I think it's fair to say that AI will increase the already high likelihood of being subject to a cyber attack at some point, and that this is a real threat to our clients. And we all know from practice that you cannot defend against everything. You can be cautious, but there might be occasions when you are subject to an attack, when there has been a successful attack or there is a cyber incident. If it is caused by AI, what do we need to do as a first responder, so to speak? Cynthia: Well, there are numerous notification obligations in relation to attacks, depending on the type of data or the entity involved. For instance, if, as a result of a breach from an AI attack, personal data is involved, then there are notification requirements under the GDPR, for instance. If you're in a certain sector that's using AI, one of the newest pieces of legislation to go into effect in the EU, the Network and Information Security Directive, tiers organizations into essential entities and important entities.
And, you know, depending on whether the sector the particular victim is in is subject to either, you know, the essential entity requirements or the important entity requirements, there's a notification obligation under NIS 2, for short, in relation to vulnerabilities and attacks. And ENISA, who Asélle was just talking about, has most recently issued a report for, let's say, network and other providers, which are essential entities under NIS 2, in relation to what is considered a significant vulnerability or a material event that would need to be notified to the regulatory entity and the relevant member state for that particular sector. And I'm sure there are other notification requirements. I mean, for instance, financial services are subject to a different regulation, aren't they, Asélle? And so why don't you tell us a bit more about the notification requirements for financial services organizations? Asélle: The EU Digital Operational Resilience Act also imposes similar requirements on the supply chain of financial entities, specifically the ICT third-party providers, which AI providers may fall into. And Article 30 of DORA requires specific contractual clauses requiring cybersecurity around data, for example. So it requires provisions on availability, authenticity, integrity, and confidentiality. There are additional requirements for those ICT providers whose product, say an AI product as an ICT product, supports a critical or important function in the provision of the financial services. In that case, there will be additional requirements, including on ICT security measures. So in practical terms, it would mean organizations that are regulated in this way are likely to ask AI providers to have additional tools, policies and measures, and to provide evidence that such measures have been taken. It's also worth mentioning the developments on AI regulation in the UK. The previous UK government wanted to adopt a flexible, non-binding approach to regulating AI. However, the Labour government appears to want to adopt a binding instrument, although it is likely to be of limited scope, focusing only on the most powerful AI models. However, there isn't any clarity in terms of whether the use of AI in cyber threats is regulated in any specific way. Christian, I wanted to direct a question to you. How about the use of AI in supply chains? Christian: Yeah, I think it's very important to have a look at the entire supply chain of the companies, or the entire contractual relationships. Because most of our clients or companies out there do not develop or create their own AI. They will use AI from vendors, or their suppliers or vendors will use AI products to be more efficient. And all the requirements, for example the notification requirements that Cynthia just mentioned, do not stop if you use a third party. So even if you engage a supplier or a vendor, you're still responsible for defending against cyber attacks and for reporting cyber incidents or attacks if they concern your company, or at least there's a high likelihood that you are.
So it's very crucial to have those scenarios in mind when you're starting a procurement process and negotiating contracts: to cover those topics in the contract with a vendor or supplier, to include notification obligations in case there is a cyber attack at that vendor, and, depending on your negotiation position, to secure some audit rights and inspection rights, but at least to make sure that you are aware if something happens, so that a risk that does not directly materialize at your company cannot sneak in through the back door via a vendor. So it's really important that you always have an eye on your supply chain and on your third-party vendors or providers. Cynthia: That's such a good point, Christian. And ultimately, I think it's best for organizations to think about it early. So it really needs to be embedded as part of any kind of supply chain due diligence, where maybe a new question needs to be added to a due diligence questionnaire on suppliers about whether they use AI, and then the cybersecurity around the AI that they use or contribute to. Because we've all read and heard in the papers, and seen through client counseling, cybersecurity breaches that have come through the supply chain and may not be direct attacks on the client itself. And yeah, I mean, the contractual provisions then are really important. Like you said, making sure that the supplier notifies the customer very early on, and then that there are cooperation and audit mechanisms. Asélle, anything else to add? Asélle: Yeah, I totally agree with what was said. I think beyond just the legal requirements, it is ultimately a question of defending your business and your data. Whether or not it's required by your customers or by specific legislation to which your organization may be subject, it's ultimately whether or not your business can withstand more sophisticated cyber attacks. I therefore agree with both of you that organizations should take supply chain resilience, cybersecurity, and the generally higher risk of cyber attacks more seriously and put measures in place; it is better to invest now than later, after the attack. I also think that it is important for in-house teams to work together as cybersecurity threats are enhanced by AI, and these are the legal, IT security, risk management, and compliance teams. Sometimes, for example, legal teams might think that the IT security or incident response policies are owned by IT, so there isn't much contribution needed. Or the IT security teams might think the legal requirements are in the legal team's domain, so they'll wait to hear from legal on how to reflect those. So working in silos is not beneficial. IT policies, incident response policies, and training material on cybersecurity should be regularly updated by IT teams and reviewed by legal to reflect the legal requirements. The teams should collaborate on running tabletop incident response and crisis response exercises, because in a real scenario, they will need to work hand in hand to respond efficiently. Cynthia: Yeah, I think you're right, Asélle. I mean, obviously, any kind of breach is going to be multidisciplinary in the sense that you're going to have people who understand AI and understand, you know, the attack vector which used the AI. You know, other people in the organization will have a better understanding of notification requirements, whether that be notification under the cybersecurity directives and regulations or under the GDPR.
And obviously, if it's an attack that's come from the supply chain, there needs to be that coordination as well with the supplier management team. So it's definitely multidisciplinary and requires cooperation and information sharing, obviously in a way that's done in accordance with the regulatory requirements that we've talked about. So in sum, you have to think about AI and cybersecurity both from a design perspective and from a supply chain perspective, and about how AI might be used for attacks, whether it's vulnerabilities in a network or data poisoning or model poisoning. Think about the horizontal requirements across the EU in relation to cybersecurity requirements for keeping systems safe, and, if you're an unfortunate victim of a cybersecurity attack where AI has been used, think about the notification requirements and ultimately that multidisciplinary team that needs to be put in place. So thank you, Asélle, and thank you, Christian. We really appreciate the time to talk together this morning. And thank you to our listeners. And please tune in for our next Tech Law Talks on AI. Asélle: Thank you. Christian: Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.…
Tech Law Talks

1 AI explained: AI regulations and PRC court decisions in China 21:08
Reed Smith lawyers Cheryl Yu (Hong Kong) and Barbara Li (Beijing) explore the latest developments in AI regulation and litigation in China. They discuss key compliance requirements and challenges for AI service providers and users, as well as the emerging case law on copyright protection and liability of AI-generated content. They also share tips and insights on how to navigate the complex and evolving AI legal landscape in China. Tune in to learn more about China’s distinct approach to issues involving AI, data and the law. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Cheryl: Welcome to our Tech Law Talks and new series on artificial intelligence. Over the months, we have been exploring the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI regulations in China and the relevant PRC court decisions. My name is Cheryl Yu, a partner in the Hong Kong office at Reed Smith, and I'm speaking today with Barbara Li, who is a partner based in our Beijing office. Barbara and I are going to focus on the major legal regulations on AI in China and also some court decisions relating to AI tools to see how China's legal landscape is evolving to keep up with the technological advancements. Barbara, can you first give us an overview about China's AI regulatory developments? Barbara: Sure. Thank you, Cheryl. Very happy to do that. In the past few years, the regulatory landscape governing AI in China has been evolving at a very fast pace. Although China does not have a comprehensive AI law like the EU AI Act, China has been leading the way in rolling out multiple AI regulations governing generative AI, deepfake technologies, and algorithms. In July 2023, China issued the Generative AI Measures, becoming one of the first countries in the world to regulate generative AI technologies. These measures apply to generative AI services offered to the public in China, regardless of whether the service provider is based in China or outside China. And international investors are allowed to set up local entities in China to develop and offer AI services in China. In relation to the legal obligations, the measures lay down a wide range of legal requirements on providing and using generative AI services, including content screening, protection of personal data and privacy, safeguarding IPR and trade secrets, and taking effective measures to prevent discrimination when companies design algorithms, choose training data, or create large language models. Cheryl: Many thanks, Barbara. These are very important compliance obligations that businesses should not neglect when engaging in the development of AI technologies, products, and services. I understand that one of the biggest concerns in AI is how to avoid hallucination and misinformation. I wonder if China has adopted any regulations to address these issues? Barbara: Oh, yes, definitely, Cheryl. China has adopted multiple regulations and guidelines to address these concerns.
For example, the Deep Synthesis Rule, which became effective from January 2023, aims to govern the use of deepfake technologies in generating or changing digital content. And when we talk about digital content, the regulation refers to a wide range of digital media, including video, voices, text, and images. And the deep synthesis service providers must refrain from using deep synthesis services to produce or disseminate illegal information. And also, the companies are required to establish and improve proper compliance and risk management systems, such as having a user registration system, doing an ethics review of the algorithm, protecting personal information, taking measures to protect IP and prevent misinformation and fraud, and also, last but not least, setting up a data breach response. In addition, China's national data and cybersecurity regulator, which is the CAC, has issued a wide range of rules on algorithm filing. And these algorithm filing requirements have become effective from June 2024. According to this 2024 regulation, if a company uses algorithms in its online services with functions such as blogs, chat rooms, public accounts, short videos, or online streaming, and those functions are capable of influencing public opinion or driving social engagement, then the service provider is required to file its algorithm with the CAC, the regulator, within 10 working days after the launch of the service. So in order to finish the algorithm filing, the company is required to put together a comprehensive set of information and documentation, including the algorithm assessment report, security monitoring policy, data breach response plan, and also some technical documentation to explain the function of the algorithm. And also, the CAC has periodically published a list of filed algorithms, and up to the 30th of June 2024, we have seen over 1,400 AI algorithms, developed by more than 450 companies, successfully filed with the CAC. So you can see this large number of AI algorithm filings has indeed highlighted the rapid development of AI technologies in China. And also, we should remember that large volumes of data are the backbone of AI technologies. So we should not forget about the importance of data protection and privacy obligations when you develop and use AI technologies. Over the years, China has built up a comprehensive data and privacy regime with three pillars of national laws. Those laws include the Personal Information Protection Law, normally known by the short name PIPL, and also the Cybersecurity Law and the Data Security Law. So the data protection and cybersecurity compliance requirements have got to be properly addressed when companies develop AI technologies, products, and services in China. And indeed, there are some very complicated data requirements and issues under the Chinese data and cybersecurity laws, for example, how to address cross-border data transfers. So it's very important to remember those requirements. The Chinese data regime and legal framework is very complex, so given the time constraints, we can probably find another time to specifically talk about the data issues under Chinese law. Cheryl: Thanks, Barbara. Indeed, there are some quite significant AI and data issues which would warrant more time for a deeper dive.
Barbara, can you also give us an update on the AI enforcement status in China and share with us your views on the best practices that companies can take in mitigating those risks? Barbara: Yes, thanks, Cheryl. Indeed, Chinese AI regulations do have teeth. For example, the violation of the algorithm filing requirement can result in fines of up to RMB 100,000. And also the failure to comply with those compliance requirements in developing and using AI technologies can also trigger legal liability under the Chinese PIPL, which is the Personal Information Protection Law, and also the Cybersecurity Law and the Data Security Law. And under those laws, a company can face a monetary fine of up to RMB 50 million or 5% of its last year's turnover. In addition, the senior executives of the company can be personally subject to liability, such as a fine of up to RMB 1 million, and the senior executives can also be barred from taking senior roles for a period of time. In the worst scenario, criminal liability can be pursued. So, in the first and second quarters of this year, 2024, we have seen some companies caught by the Chinese regulators for failing to comply with the AI requirements, ranging from failure to monitor AI-generated content to neglecting the AI algorithm filing requirements. Noncompliance has resulted in the suspension of their mobile apps pending rectification. As you can see, the noncompliance risk is indeed real, so it's very important for businesses to pay close attention to the relevant compliance requirements. So to just give our audience a few quick takeaways in terms of how to address the AI regulatory and legal risk in China, we would say companies can consider three of the most important compliance steps. The first is that, with the fast development of AI in China, it's crucial to closely monitor the legislative and enforcement developments in AI, data protection, and cybersecurity. While the Chinese AI and data laws share some similarities with the laws in other countries, for example, the EU AI Act and the European GDPR, Chinese AI and data laws and regulations indeed have their unique characteristics and requirements. So it's extremely important for businesses to understand the Chinese AI and data laws, conduct a proper analysis of the key business implications, and also take appropriate compliance action. So that is number one. And the second one, I would say, in terms of your specific AI technologies, products and services rolling out in the China market, it's very important to do the required impact assessment to ensure compliance with accountability, bias, and also accessibility requirements, and also build up a proper system for content monitoring. If your algorithm falls within the scope of the filing requirements, you definitely need to prepare the required documents and finish the algorithm filing as soon as possible to avoid the potential penalties and compliance risks. And the third one is that you should definitely prepare your China AI policies and AI terms of use, build up your AI governance and compliance mechanism in line with the evolving Chinese AI regulation, and also train your team on the compliant use of AI in their day-to-day work. It's also very important and very interesting to note that in the past months, Chinese courts have given some landmark rulings in trials in relation to AI technology.
Those rulings cover various AI issues, ranging from copyright protection of AI-generated content to data scraping and privacy. Cheryl, can you give us an overview about those cases and what takeaways we can get from those rulings? Cheryl: Yes, thanks, Barbara. As mentioned by Barbara, with the emerging laws in China, there have been a lot of questions relating to AI technologies which interact with copyright law. The most commonly discussed questions include: if users instruct an AI tool to produce an image, who is the author of the work, the AI tool, or the person giving instructions to the AI tool? And if the AI tool generates a work that bears a strong resemblance to another work already published, would that constitute an infringement of copyright? Before 2019, the position in China was that works generated by AI machines generally were not subject to copyright protection. For a work to be copyrightable, the courts will generally consider whether the work is created by natural persons and whether the work is original. Subsequently, there has been a shift in the Chinese courts' position, in which the courts are more inclined to protect the copyright of AI-generated content. For example, the Nanshan District Court of Shenzhen handed down a decision, Shenzhen Tencent versus Shanghai Yinsheng, in 2019. The court held that the plaintiff, Shenzhen Tencent, should be regarded as the author of an article which was generated by an AI system under the supervision of the plaintiff. The court further held that the intellectual contribution of the plaintiff's staff, including inputting data, setting prompts, selecting the template, and the layout of the article, played a direct role in shaping the specific expression of the article. Hence, the article demonstrated sufficient originality and creativity to warrant copyright protection. Similarly, the Beijing Internet Court reached the same decision in Li Yunkai v. Liu Yuanchun in 2023, and the court held that AI-generated content can be subject to copyright protection if the human user has contributed substantially to the creation of the work. In its judgment, the court ruled that an AI machine cannot be an author of the work, since it is not human. And the plaintiff is entitled to the copyright in the photo generated by the AI machine on the grounds that the plaintiff personally chose and arranged the order of prompts, set the parameters, and selected the style of the output, which warrants a sufficient level of originality in the work. As you may note, in both cases, for a work to be copyrightable in China, the courts no longer required it to be created entirely by a human being. Rather, the courts focused on whether there was an element of original intellectual achievement. Interestingly, there's another case handed down by the Hangzhou Internet Court in 2023, which has been widely criticized in China. This court decided that the AI was not an author, not because it was non-human, but because it was a weak AI and did not possess the relevant capability for intellectual creation. And this case has created some uncertainty as to the legal status of an AI that is stronger and has the intellectual capability to generate original works, and questions such as whether such an AI would qualify as an author and be entitled to copyright over its works remain to be seen as the technology and law develop. Barbara: Thank you, Cheryl. We now understand the position in relation to authorship under Chinese law.
What about the platforms which provide generative AI tools? I understand that they also face the question of whether there will be secondary liability for infringement in AI-generated content output. Have the Chinese courts issued any case on this topic? Cheryl: Many thanks, Barbara. Yes, there's some new development on this issue in China in early 2024. The Guangzhou Internet Court published a decision on this issue, which is the first decision in China regarding the secondary liability of AI platform providers. The plaintiff in this case has exclusive rights to a Japanese cartoon image, the Ultraman, including various rights such as reproduction, adaptation, etc. And the defendant was an undisclosed AI company that operates a website with an AI conversation function and an AI image generation function. These functions were provided using an unnamed third-party provider's AI model, which was connected to the defendant's website. The defendant allowed visitors to its website to use this AI model to generate images, but it hadn't created the AI model itself. The plaintiff eventually discovered that if one inputs prompts related to Ultraman, the generative AI tool would produce images highly similar to Ultraman. The plaintiff then brought an action of copyright infringement against the defendant. And the court held that, in this case, the defendant platform had breached a duty of care to take appropriate measures to ensure that outputs do not contravene any copyright law and the relevant AI regulations in China, and that the output the generative AI tool created had infringed the copyright in the protected works. So this Ultraman case serves as a timely reminder to Chinese AI platform providers that it is of utmost importance to comply with the relevant laws and regulations in China. Another interesting point of law is the potential liability of AI developers in the scenario that copyright materials are used to train the AI tool. So far, there haven't been any decisions relating to this issue in China, and it remains to be seen whether AI model developers would be liable for infringement of copyright in the process of training their AI models with copyrightable materials, and if so, whether there are any defenses available to them. We shall continue to follow up and keep everyone posted in this regard. Barbara: Yes, indeed, Cheryl, those are all very interesting developments. So to conclude our podcast today, with the advancement of AI technology, it's almost inevitable that more legal challenges will emerge related to the training and application of generative AI systems. The courts will be expected to develop innovative legal interpretations to strike a balance between safeguarding copyright and promoting technology innovation and growth. So our team at Reed Smith in Greater China will bring all the updates to you on these developments. So please do stay tuned. Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes.
It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.…
Tech Law Talks

Reed Smith partners Claude Brown and Romin Dabir discuss the challenges and opportunities of artificial intelligence in the financial services sector. They cover the regulatory, liability, competition and operational risks of using AI, as well as the potential benefits for compliance, customer service and financial inclusion. They also explore the strategic decisions firms need to make regarding the development and deployment of AI, and the role of regulators play in supervising and embracing AI. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Claude: Welcome to Tech Law Talks and our new series on artificial intelligence, or AI. Over the coming months, we'll explore key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI in financial services. And to do that, I'm here. My name is Claude Brown. I'm a partner in Reed Smith in London in the Financial Industry Group. And I'm joined by my colleague, Romin Dabir, who's a financial services partner, also based in London. Romin: Thank you, Claude. Good to be with everyone. Claude: I mean, I suppose, Romin, one of the things that strikes me about AI and financial services is it's already here. It's not something that's coming in. It's been established for a while. We may not have called it AI, but many aspects it is. And perhaps it might be helpful to sort of just review where we're seeing AI already within the financial services sector. Romin: Yeah, absolutely. No, you're completely right, Claude. Firms have been using AI or machine learning or some form of automation in their processes for quite a while, as you rightly say. And this has been mainly driven by searches for efficiency, cost savings, as I'm sure the audience would appreciate. There have been pressures on margins and financial services for some time. So firms have really sought to make their processes, particularly those that are repetitive and high volume, as efficient as possible. And parts of their business, which AI has already impacted, include things like KYC, AML checks, back office operations. All of those things are already having AI applied to them. Claude: Right. I mean, some of these things sound like a good thing. I mean, improving customer services, being more efficient in the know-your-customer, anti-money laundering, KYC, AML areas. I suppose robo-advice, as it's called sometimes, or sort of asset management, portfolio management advice, might be an area where one might worry. But I mean, the general impression I have is that the regulators are very focused on AI. And generally, when one reads the press, you see it being more the issues relating to AI rather than the benefits. I mean, I'm sure the regulators do recognize the benefits, but they're always saying, be aware, be careful, we want to understand better. Why do you think that is? Why do you think there's areas of concern, given the good that could come out of AI? Romin: No, that's a good question. 
I think regulators feel a little bit nervous when confronted by AI because obviously it's novel, it's something new, well, relatively new, that they are still trying to understand fully and get their arms around. And there are issues that arise where AI is applied to new areas. So, for example, you give the example of robo-advice or portfolio management. Now, these were activities that traditionally have been undertaken by people. And when advice or investment decisions are made by people, it's much easier for regulators to understand and to hold somebody accountable for that. But when AI is involved, responsibility sometimes becomes a little bit murkier and a little bit more diffuse. So, for example, you might have a regulated firm that is using software or AI that has been developed by a specialist software developer. And that software is able to effectively operate with minimal human intervention, which is really one of the main drivers behind the adoption of AI, because obviously it costs less, it is less resource intensive in terms of skilled people to operate it. But under those circumstances, who has the regulatory responsibility? Is it the software provider who makes the algorithm, programs the software, etc., and then the software goes off and makes decisions or provides the advice? Or is it the firm that's actually running the software on its systems when it hasn't actually developed that software? So there are some knotty problems, I think, that regulators are still mulling through and working out what they think the right answers should be. Claude: Yeah, I can see that, because I suppose historically the classic model, certainly in the UK, has been that the regulators say, if you want to outsource something, you, the regulated entity, be you a broker or asset manager or a bank or an investment firm, you are the authorized entity, you're responsible for your outsourcer or your outsource provider. But I can see with AI, that must become a harder question to determine, you know, because, say, in your example, if the AI is performing some sort of advisory service, has the perimeter gone beyond the historically regulated entity, and does it then start to impact on the software provider? That's sort of one point. And how do you allocate that responsibility? That strict bright line of you want to give it to a third-party provider, it's your responsibility. How do you allocate that responsibility between the two entities? Even outside the regulator's oversight, there's got to be an allocation of liability and responsibility. Romin: Absolutely. And as you say, with traditional outsource services, it's relatively easy for the firm to oversee the activities of the outsource services provider. It can get MI, it can have systems and controls, it can randomly check on how the outsource provider is conducting the services. But with something that's quite black box, like some algorithm, a trading algorithm for portfolio management, for example, it's much harder for the firm to demonstrate that oversight. It may not have the internal resources. How does it really go about doing that? So I think these questions become more difficult. Claude: And I suppose the other thing that makes it more difficult with AI, compared to the traditional outsourcing model, even the black box algorithms, is that by and large they're static. You know, whatever it does, it keeps on doing. It doesn't evolve by its own processes, which AI does.
So it doesn't matter really whether it's outsourced or it's in-house to the regulated entity. That thing is sort of changing all the time, and supervising it is a dynamic process, and the speed at which it learns, which is in part driven by its usage, means that the dynamics of its oversight must be able to respond to the speed of its evolving. Romin: Absolutely, and you're right to highlight all of the sort of liability issues that arise, not just simply vis-a-vis liabilities to the regulator for performing the services in compliance with the regulatory duties, but also to clients themselves. Because if the algo goes haywire and suddenly, you know, loses customers loads of money or starts making trades that were not within the investment mandate provided by the client, where does the buck stop? Is that with the firm? Is that with the person who provided the software? It's all, you know, a little difficult. Claude: I suppose the other issue is that at the moment there's a limited number of outsourced providers. One might reasonably expect, competition being what it is, for that to proliferate over time, but until it does, I would imagine there's a sort of competition issue: not only a competition issue in one system gaining a monopoly, but that particular form of large model learning then starts to dominate and produce, for want of a better phrase, a groupthink. And I suppose one of the things that puzzles me is, is there a possibility that you get a systemic risk by the alignment of the thinking of various financial institutions using the same or a similar system of AI processes, which then start to produce a common result? And then possibly producing a common misconception, which introduces a sort of black swan event that wasn't anticipated. Romin: And sort of self-reinforcing feedback loops. I mean, there was the story of the flash crash that occurred with all these algorithmic trading firms all of a sudden reacting to the same event and all placing sell orders at the same time, which created a market disturbance. That was a number of years ago now. You can imagine such effects, as AI becomes more prevalent, potentially being even more severe in the future. Claude: Yeah, no, I think that's, again, an issue that regulators do worry about from time to time. Romin: And I think another point, as you say, is competition. Historically, asset managers have differentiated themselves on the basis of the quality of their portfolio managers and the returns that they deliver to clients, etc. But in a world where we have a number of software providers, maybe one or two of which become really dominant, and lots of firms are employing technology provided by these firms, differentiating becomes more difficult in those circumstances. Claude: Yeah, and I guess to unpack that a little bit, you know, as you say, portfolio managers have distinguished themselves by better returns than the competition and certainly better returns than the market average, and that then points to the quality of their research and their analytics. So then I suppose the question becomes, to what extent is AI being used to produce that differentiator, and how do you charge your fees based on that? Is it that you've got better technology than anyone else, or is it that you've got a better way to deploy the technology, or is it that you've just paid more for your technology? Because transforming the input of AI into the analytics and the portfolio management is quite a difficult thing to do at the best of times.
If it's internal, it's clearly easier because it's just your mousetrap and you built that mousetrap. But when you're outsourcing, particularly in your example, where you've got a limited number of technology providers, that split I can see becoming quite contentious. Romin: Yeah, absolutely. Absolutely. And I think firms themselves will need to sort of decide what approach they are going to take to the application of AI, because if they go down the outsourced approach, that raises the issues that we've discussed so far. Conversely, if they adopt a sort of in-house model, they have more control, the technology is proprietary, and potentially they can distinguish themselves and differentiate themselves better than by relying on an outsourced solution. But then, you know, the cost is far greater. Will they have the resources and expertise really to compete with these large specialist providers that serve many different firms? There are lots of strategic decisions that firms need to make as well. Claude: Yeah, but I mean, going back to the regulators for a moment, Romin, it does seem to me that there are some benefits to regulators in embracing AI within their own world, because certainly we already see the evidence that they're very comfortable using manipulation of large databases. For example, trade repositories or trade reporting. We can see sort of enforcement actions being brought using databases that have produced the information, the anomalies, and as I see it, AI can only improve that form of surveillance and enforcement, whether that is market manipulation or insider dealing or looking across markets to see whether sort of concurrent or collaborative activity is engaged in. And it may not get to the point where the AI is going to bring the whole enforcement action to trial. But it certainly makes that demanding surveillance and oversight role for a regulator a lot easier. Romin: Absolutely. Couldn't agree more. I mean, historically, firms have often complained, and it's a very common refrain in the financial services markets: we have to make all these ridiculous reports, detailed reports, send all this information to the regulator. And firms were almost certain that it would just disappear into some black hole and never be looked at again. Again, historically, that was perhaps true, but with the new technology that is coming on stream, it gives regulators much more opportunity to meaningfully interrogate that data and use it to either bring enforcement action against firms or just supervise trends, risks, and currents in markets which might otherwise not have been available or apparent to them. Claude: Yeah, I mean, I think, to my mind, data before you apply technology to it is rather like the end of Raiders of the Lost Ark in the Spielberg film, you know, where they take the Ark of the Covenant and push it into that huge warehouse and the camera pans back and you just see massive, massive data. But I suppose you're right that with AI you can go and find the crate with the thing in it. Other Spielberg films are available. It seems to me almost inexorable that the use of AI in financial services will increase, and, you know, the potential and the efficiencies, particularly with large-scale and repetitive tasks and more inquiry, it's not just a case of automation, it's a case of sort of overseeing it. But I suppose that begs a bit of a question as to who's going to be the dominant force in the market.
Is it going to be the financial services firms or the tech firms that can produce more sophisticated AI models? Romin: Absolutely. I mean, I think we've seen amongst the AI companies themselves, so you know, the key players like Google, OpenAI, Microsoft, there's a bit of an arms race between them as to the best LLM, who can come up with the most accurate, best, fastest answers to queries. I think within AI and financial services, it's almost inevitable that there'll be a similar race. And I guess the jury's still out as to who will win. Will it be the financial services firms themselves, or will it be these specialist technology companies that apply their solutions in the financial services space? I don't know, but it will certainly be interesting to see. Claude: Well, I suppose the other point with the technology providers, and you're right, I mean, you can already see that when you get into cloud-based services and software as a service and the others, that the technology is becoming a dominant part of financial services, not necessarily the dominant part, but certainly a large part of it. And that, to my mind, raises a really interesting question about the commonality of technology in general and AI in particular. You know, you can now see these services, and I can see this with AI as well, entering into a number of financial sectors which historically have been diffuse, so the use of AI, for example, in insurance, the use in banking, the use in asset management, the use in broking, the use in advisory services. There's now a coming together of the platforms and the technology, such as LLMs, across all of them. And that then begs the question, is there an operational resilience question? It's almost like, does AI ever become so pervasive that it is a bit like electricity, like power? You can see that with CrowdStrike. Is the technology so all-pervasive that actually it produces an operational risk concern that would cause a regulator, to take it to an extreme, to alter the operational risk charge in the regulatory capital environment? Romin: Yeah, exactly. I think this is certainly a space that regulators are looking at with increased attention, because some of the emerging risks, etc., might not be apparent. So, like you mentioned with CrowdStrike, nobody really knew that this was an issue until it happened. So regulators, I think, are very nervous of the unknown unknowns. Claude: Yeah. I mean, it seems to me that AI has a huge potential in the financial services sector in, A, facilitating the mundane, but also in being proactive in identifying anomalies, potentials for errors, potentials for fraud. It's like, you know, there's a huge amount that it can contribute. But as always, you know, that brings structural challenges. Romin: Absolutely. And just on the point that we were discussing earlier about the increased efficiencies that it can bring to markets, you know, there's been a recognized problem with the so-called advice gap in the UK, where the kind of mass affluent, less high net worth investors aren't really willing to pay for the receipt of financial advice. As technology gets better, the cost of accessing more tailored, intelligent advice will hopefully come down, leading to the ability for people to make more sensible financial decisions. Claude: Which I'm sure is part of the responsibility of financial institutions to improve financial and fiscal education. That's going to be music to a regulator's ears.
Well, Romin, interesting subject, interesting area. We live, as the Chinese say, in interesting times. But I hope to those of you who've listened, it's been interesting. We've enjoyed talking about it. Of course, if you have any questions, please feel free to contact us, my colleague, Romin Dabir, or myself, Claude Brown. You can find our contact details accompanying this and also on our website. Thank you for listening. Romin: Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.…
Tech Law Talks

This episode highlights the new benefits, risks and impacts on operations that artificial intelligence is bringing to the transportation industry. Reed Smith transportation industry lawyers Han Deng and Oliver Beiersdorf explain how AI can improve sustainability in shipping and aviation by optimizing routes and reducing fuel consumption. They emphasize AI’s potential contributions from a safety standpoint as well, but they remain wary of risks from cyberattacks, inaccurate data outputs and other threats. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Han: Hello, everyone. Welcome to our new series on AI. Over the coming months, we will explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, my colleague Oliver and I will focus on AI in shipping and aviation. My name is Han Deng, a partner in the transportation industry group in New York, focusing on the shipping industry. So AI and machine learning have the potential to transform the transportation industry. What do you think about that, Oliver? Oliver: Thanks, Han, and it's great to join you. My name is Oliver Beiersdorf. I'm a partner in our transportation industry group here at Reed Smith, and it's a pleasure to be here. I'm going to focus a little bit on the aviation sector. And in aviation, AI is really contributing to a wide spectrum of value opportunities, including enhancing efficiency, as well as safety-critical applications. But we're still in the early stages. The full potential of AI within the aviation sector is far from being harnessed. For instance, there's huge potential for use in areas which will reduce human workload or increase human capabilities in very complex scenarios in aviation. Han: Yeah, and there's similar potential within the shipping industry with platforms designed to enhance collision avoidance, route optimization, and sustainability efforts. In fact, AI is predicted to contribute $2.5 trillion to the global economy by 2030. Oliver: Yeah, that is a lot of money, and it may even be more than that. But with that economic potential, of course, also comes substantial risks. And AI users and operators and industries now getting into using AI have to take preventative steps to avoid cyber security attacks. Inaccurate data outputs, and other threats. Han: Yeah, and at Reed Smith, we help our clients to understand how AI may affect their operations, as well as how AI may be utilized to maximize potential while avoiding its pitfalls and legal risks. During this seminar, we will highlight elements within the transportation industry that stand to benefit significantly from AI. Oliver: Yeah, so a couple of topics that we want to discuss here in the next section, and there's really three of them which overlap between shipping and aviation in terms of the use of AI. And those topics are sustainability, safety, and business efficiency with the use of AI. In terms of sustainability, across both sectors, AI can help with route optimization, which saves on fuel and thus enhances sustainability. 
Han: AI can make a significant difference in sustainability across the whole of the transportation industry by decreasing emissions. For example, within the shipping sector, emerging tech companies are developing systems that can directly link the information generated about direction and speed to a ship's propulsion system for autonomous regulation. AI also has the potential to create optimized routes using sensors that track and analyze real-time and variable factors such as wind speed and current. AI can determine both the ideal route and speed for a specific ship at any point in the ocean to maximize efficiency and minimize fuel usage. Oliver: So you can see the same kind of potential in the aviation sector. For example, AI has the potential to assist with optimizing flight trajectories, including creating so-called green routes and increasing prediction accuracy. AI can also provide key decision makers and experts with new features that could transform air traffic management in terms of new technologies and operating procedures and create greater efficiencies. Aside from reducing emissions, these advances have the potential to offer big savings in energy costs, which, of course, is a major factor for airlines and other players in the industry, with the cost of gas being a major factor in their budgets, and in particular, jet fuel for airlines. So advances here really have the potential to offer big savings that will enable both sectors to enhance profitability while decreasing reliance on fossil fuels. Han: I totally agree. And further, you know, in terms of safety, AI can be used within the transportation industry to assist with safety assessment and management by identifying, managing, and predicting various safety risks. Oliver: Right. So, in the aviation sector, AI has the potential to increase safety by driving the development of new air traffic management systems to maintain distances between aircraft, plan safer routes, assist in approaches to busy airports, and develop new conflict detection, traffic advisory, and resolution tools, along with cyber resilience. What we're seeing, of course, in aviation, and there's a lot of discussion about it, is the use of drones and eVTOLs, that is, electric vertical takeoff and landing aircraft, all of which add more complexity to the existing use of airspace. And you're seeing many players in the industry, including retailers who deliver products, using eVTOLs and drones to deliver product. And AI can be a useful assistant to ATM actors, from planning to operations, and really across all airspace users. It can benefit airline operators as well, who depend on predictable routine routes and services, by using aviation data to predict air traffic management more accurately. Han: That's fascinating, Oliver. Same within the shipping sector. For example, AI has the capacity to create 3D models of areas and use those models to simulate the impact of disruptions that may arise. AI can also enhance safety features through the use of vision sensors that can respond to ship traffic and prevent accidents. As AI begins to be able to deliver innovative responses that enhance the predictability and resilience of the traffic management system, it will increase productivity and enhance the use of scarce resources like airspace and runways. Oliver: Yeah. So it'll be really interesting to follow, you know, how this develops. It's all still very new.
Another area where you're going to see the use of AI, and we already are, is in terms of business efficiency, again, in both the shipping and aviation sectors. There's really a lot of potential for AI, including in generating data and cumulative reports based on real-time information. And by increasing the speed at which the information is processed, companies can identify issues early on and perform predictive maintenance to minimize disruptions. The ability to generate reports is also going to be useful in ensuring compliance with regulations and also in coordinating work with contractors, vendors, partners, such as code share partners in commercial aviation, and other stakeholders in the industry. Han: Yeah, and AI can be used to perform comprehensive audits to ensure that all cargo is present and that it complies with contracts and local and national regulation, which can help identify any discrepancies quickly and lead to swift resolution. AI can also be used to generate reports based on this information to provide autonomous communication with contractors about cargo location and the estimated time of arrival, increasing communication and visibility in order to inspire trust and confidence. Aside from compliance, these reports will also be useful in ensuring efficiencies in management and business development and strategy by performing predictive analytics in various areas, such as demand forecasting. Oliver: And despite all these benefits, of course, as with any new technology, you need to weigh that against the potential risks and various things that can happen by using AI. So let's talk a little bit about cybersecurity, and regulation being unable to keep pace with technology development, inaccurate data, and industry fragmentation. Things are just happening so fast that there's a huge risk associated with the use of artificial intelligence in many areas, but also in the transportation industry, including as a result of cybersecurity attacks. Data security breaches can affect airline operators or can also occur on vessels, in port operations, and in undersea infrastructure. Cyber criminals, who are becoming more and more sophisticated, can even manipulate data inputs, causing AI platforms on vessels to misidentify malicious maritime activity as legitimate or safe trade. Actors using AI are going to need to ensure the cyber safety of AI-enabled systems. I mean, that's a focus in both shipping and aviation and in other industries. Businesses and air traffic providers need to ensure that AI-enabled applications have robust cybersecurity elements built into their operational and maintenance schedules. Shipping companies will need to update their current cybersecurity systems and risk assessment plans to address these threats and comply with relevant data and privacy laws. A real recent example is the CrowdStrike software outage on July 19th, which really affected almost every industry. But we saw it being particularly acute in the aviation industry and commercial aviation, with literally thousands of flights being canceled and massive disruption to the industry. And interestingly, with the CrowdStrike software outage, what we're talking about there is really software that's intended to avoid cyber criminal risk.
And a, you know, a programming issue can result in, you know, systems being down and these types of massive disruptions, because of course, in both aviation and in shipping, we're so reliant on technologies. And the issue of regulation, and really the inability of regulators to keep up with this incredibly fast pace, is another concern. And regulations are always reactive. In this instance, AI continues to rapidly develop, and regulations do not necessarily effectively address AI in its most current form. The unchecked use of AI could create and increase the risk of cybersecurity attacks and data privacy law violations, and frankly, create other risks that we haven't even been able to predict. Han: Wow, we really need to buckle up in these times of cybersecurity. And talking about inaccurate data, the quality of AI depends upon the quality of its data inputs. Therefore, misleading and inaccurate data sets could lead to imprecise predictions for navigation. Alternatively, there is a risk that users may rely too heavily on AI platforms to make important decisions about collision avoidance and route optimization. And so the shipping companies must be sure to properly train their employees on the proper uses of AI. And speaking of industry fragmentation, AI is an expensive tool. Poorer economies will be unable to integrate AI platforms in their maritime or aviation operations, which could fragment global trade. For example, without harmony in AI use and proficiency, the shipping industry may see a decrease in revenue, a lack of global governance, and the rise of black market dark fleets. Oliver: There's just so much to talk about in this area. It's really almost mind-blowing. But in conclusion, I think a couple of points that have come out of our discussion are that if the industry takes action and fully captures AI-enabled value opportunities in both the short and the long terms, the potential for AI is just huge. But we have to be very mindful of the associated risks and empower private industry and governments to provide resolutions through technology, but also regulations. So thank you very much for joining us. That's it for today. And we really appreciate you listening in to our Tech Law Talks. Han: Thank you. Oliver: Thank you. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.…
Tech Law Talks

Emerging technology lawyers Therese Craparo, Anthony Diana and Howard Womersley Smith discuss the rapid advancements in AI in the financial services industry. AI systems have much to offer, but most bank compliance departments cannot keep up with the pace of integration. The speakers explain: If financial institutions turn to outside vendors to implement AI systems, they must work to achieve effective risk management that extends out to third-party vendors. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Therese: Hello, everyone. Welcome to Tech Law Talks and our series on AI. Over the coming months, we'll be exploring the key challenges and opportunities within the rapidly evolving AI landscape. And today we'll be focusing on AI in banking and the specific challenges we're seeing in the financial services industry and how the financial services industry is approaching those types of challenges with AI. My name is Therese Craparo. I am a partner in our Emerging Technologies Group here at Reed Smith, and I will let my colleagues on this podcast introduce themselves. Anthony? Anthony: Hey, this is Anthony Diana, partner in the New York office of Reed Smith, also part of the Emerging Technologies Group, and also, for today's podcast, importantly, I'm part of the Bank Tech Group. Howard: Hello, everyone. My name is Howard Womersley Smith. I'm a partner in the Emerging Technologies Group at Reed Smith in London. As Anthony says, I'm also part of the Bank Tech Group. So back to you, Therese. Therese: All right. So just to start out, what are the current developments or challenges that you all are seeing with AI in the financial services industry? Anthony: Well, I'll start. I think a few things. Number one, I think we've seen that the financial services industry is definitely all in on AI, right? I mean, there's definitely a movement in the financial services industry. All the consultants have said this, that this is one of the areas where they expect AI, including generative AI, to really have an impact. And I think one of the things that we're seeing is there's a tremendous amount of pressure on the legal and compliance departments because the businesses are really pushing to be AI forward and really focusing on AI. So one of the challenges is that this is here. It's now. It's not something you can plan for. I think half of what we're seeing is AI tools are coming out frequently, sometimes not even with the knowledge of legal and compliance, sometimes without the knowledge of the business, where, because it's in the cloud, they just put in an AI feature. So that is one of the challenges that we're dealing with right now, which is catch up. Things are moving really quickly, and then people are trying to catch up to make sure that they're compliant with whatever regs are out there. Howard? Howard: I agree with that. I think that banks are all in with the AI hype cycle, and I certainly think it is a hype cycle. I think that generally the sector is moving at the same pace, and at the moment we're looking at an uptick of interest in and procurement of AI systems into the infrastructure of banks.
I think that, from the perspective of what the development phase is, we are just looking at the stage where they are buying in AI. We are beyond the look-and-see, the sourcing phase, and looking at the buying phase and the implementation of AI into those banks. And what are the challenges there? Well, the challenges are twofold. One is from an existential perspective. Banks are looking to increase shareholder value, and they are looking to drive down costs, and we've seen that too with the dependency on technology that banks have had over the past 15 or more years. AI is an extension of that, and it's an ability for banks to introduce more automation within their organizations and rely less on humans and personnel. And we'll talk a bit more about what that involves and the risks, particularly, that could be created from relying solely on technology and not involving humans, which some proponents of AI anticipate. Therese: And I think what's interesting, just picking up on what both of you are saying, in terms of how those things come together, including from a regulatory perspective, is that historically the financial industry has used variations of AI in a lot of different ways, for trading analysis, for data analysis and the like. So the concept of AI is not unheard of in the financial services industry, but I do think it is interesting to talk about, as Howard said, the hype cycle around generative AI. That's what's throwing kind of a wrench in the process, not just for traditional controls around AI modeling and the like, but also for business use, right? Because, as Howard's saying, the focus currently is on how we use all of these generative AI tools to improve efficiencies, to save costs, to improve business operations, which is different than the use cases that we've seen in the past. And at the same time, Anthony, as you're saying, it's coming out so quickly and so fast. The development is so fast, relatively speaking. The variety of use cases is so broad in a way that it hasn't been before. And the challenge that we're seeing is that the regulatory landscape, as usual with technology, isn't really keeping up. We've got guidance coming from various regulators in the U.S. The SEC has issued guidance. FINRA has issued guidance. The CFPB has issued guidance. And all of their focus is a little bit different in terms of their concerns, right? There are concerns about ethical use and the use with consumers and the accuracy and transparency and the like. There are concerns about disclosure and appropriate due diligence and understanding of the AI that's being used. And then there are concerns about what data it's being used on and the use of AI on highly confidential information like MNPI, like CSI, like consumer data and the like. And none of it is consolidated or clear. And that's in part because the regulators are trying to keep up. And they do tend not to want to issue strict guidance on technology as it's developing, right, because they're still trying to figure out what the appropriate use is. So we have this sort of confluence of brand new use cases, democratization, the ability to extend the use of AI very broadly to users, and then the speed of development that I think the financial services industry is struggling to keep up with themselves. Anthony: Yeah, and I think the regulators have been pretty clear on that point.
Again, they're not giving specific guidance, I would say, but they say two of the things they are most concerned with are, first, AI washing, and they've already issued some fines where, if you tout that you're using AI for trading strategies or whatever, and you're not, you're going to get dinged. So that's obviously going to be part of whatever due diligence a financial institution is going to be doing on a product: making sure that it actually is AI is going to be important, because that's something the regulators care about. And then the other thing, as you said, is the sensitive information, whether it's material non-public information or, as you said, confidential supervisory information; any AI touching on those things is going to be highly sensitive. And I think one of the challenges that most financial institutions have is that they don't know where all this data is, right? Or they don't have controls around that data. So again, that's part of the challenge: as much as every financial institution is going out there saying, we're going to be leveraging AI extensively, whether they are or not remains to be seen. There are potential regulatory issues with saying that and not actually doing it, which is, I think, somewhat new. And I think, just as we sort of talked about, the question is whether the financial institutions are really prepared for this level of change that's going on. And I think that's one of the challenges that we're seeing: in essence, they're not built for this, right? And Howard, you're seeing it on the procurement side a lot as they're starting to purchase this. Therese and I are seeing it on the governance side as they try to implement this, and they're just not ready, because of the risks involved, to actually fully implement or use some of these technologies. Therese: So then what are they doing? What do we see the financial services industry doing to approach the management and governance of AI in the current environment? Howard: Well, I can answer that from an operational perspective before we go into a governance perspective. From an operational perspective, it's what Anthony was alluding to, which is that banks cannot keep up with the pace of innovation. And therefore, they need to look out into the market for technological solutions that advance them over their competitors. And when they're all looking at AI, they're all clambering over each other to find the best solutions to procure and implement into their organizations. We're seeing a lot of interest from banks in buying AI systems from third-party providers. From a regulatory perspective, that draws in a lot of concern, because there are existing regulations in the US, in the UK and the EU around how you control your supply chain and make sure that you manage your organization responsibly and faithfully with adequate risk management systems, which extend all the way out to your reliance on third-party vendors. And so the way that we're seeing banks implement these risk management systems in the context of procurement is through contracts. And that's where we come in a lot. How do they govern the purchasing of AI systems into their organization from third-party vendors? And to what extent can they legislate against everything? They can't. And so the contracts have to be extremely fit for purpose and very keenly focused on the risks that AI presents when deployed within their business. And this is all very novel.
And this, for my practice, is the biggest challenge I'm seeing. Once they deploy it into the organization, that's where, Anthony and Therese, I'll pass it back to you. Anthony: Yeah. And I think, Howard, one of the things that we're seeing as a consequence here, and this is one of the challenges, is that a lot of the due diligence, in terms of how the tool works and how it will be implemented, should be done before the contracting. I think that's one of the things that we're seeing: when does the due diligence come in? We're seeing it a lot: they've contracted already, and now they're doing the due diligence, testing it and the like. And I think that's one of the challenges that we're going to be seeing. One of the things, just from a governance perspective, and this is probably the biggest challenge, is that when you think about governance, hopefully you have a committee. I think a lot of organizations have some type of committee that's reviewing this. One of the things that we've seen, and where these committees and governance are failing, is that they're not accounting for everything. It's going to the committee, and they're signing off on data use, for example, and saying, okay, this type of data is appropriate to use, or it's not training the model, and things like that, which are very high-level and very important topics to cover. But it's not everything. And I think one of the things that we're seeing from a governance perspective is: where do you do the due diligence? Where do you get the transparency? You could have a contractual relationship and say that the tool works a certain way, that it's only doing this. But are we just going to rely on representations? Or are we actually going to do the due diligence, asking the questions, really probing, figuring out the settings, all of that? The earlier you do that, the better. Frankly, a lot of it, if it were done before the contract, would be better, because then, if you find certain risks, the contracts can reflect those risks. So that's one of the governance challenges we have as we move forward here. And also, as I talked about earlier, sometimes the contracts are already set and then they put in an AI feature. That often is another gap that we're seeing in a lot of organizations: they may have third-party governance on the procurement side, and they have contracts and the like, but then they don't really have governance on new features coming in on a contract that's in existence already, and then you have to go back. And again, in the ideal situation, if they had that, you'd go back and look at the contract and say, do we need to amend the contract? You probably should, to account for the fact that you're now using AI. So those are some of the governance challenges that we've been seeing. Therese: But I do think what's interesting is that we are seeing financial services work to put in place more comprehensive governance structures for AI, which is a new thing, right? As Anthony's saying, we are seeing committees or working groups formed that are responsible for reviewing AI use within the organization. We are seeing folks trying to structure or update their third-party governance mechanisms to route applications that may have an AI feature to review. We are seeing folks trying to bring in the appropriate personnel. So sometimes, as Anthony's saying, they're not perfect yet. They're only focused on data use or IP.
But we are more and more seeing people pull in compliance and legal and other personnel to focus on governance items. We're seeing greater training, real training of users, a lot of heavy focus on user training, appropriate use, and appropriate use cases in terms of the use of AI; greater focus on the data that's being used and how to put controls in place, which is challenging right now, to minimize the use of AI on highly confidential information, or, if it is being used, to have appropriate safeguards in place. And so I think what's interesting with AI, and different from what we've seen with other types of emerging technologies, is that both the regulators and the financial services industry are looking toward putting in place more comprehensive strategies, guidance and controls around the use of AI. It isn't perfect yet. It's not there yet. There's a lot of trial and error in development. But I think it's interesting that with AI, we are seeing a coalescence around an attempt to have greater management, oversight and governance around the use of the technology, which isn't something, frankly, that we've necessarily seen on a wide scale with other types of emerging technologies, which of course are being adopted in the financial services industry all the time. Anthony: Yeah. And just to highlight this, right, it starts with the contract, as Howard said, because that's the way you start. So once you do the contracting, testing and validation is critical. And I think that's one of the things that a lot of organizations are dealing with, because they want to understand the model. There's not a lot of transparency around how the model works. That's just the way it is. So you have to do the testing and validation. And that's the due diligence that I was talking about before. And then documenting decisions, right? So what we're seeing is: you've got a governance council; you have to make sure the contract's there; you're doing the testing and validation; and all of this is documented. To me, that's the most important thing, because when the regulators come and say, how do you deal with this, you've got to have documentation that says, here's the way we're deploying AI in the organization, here's the documentation that shows we're doing it the right way, we're testing it, we're validating it, we have good contracts in place. All of that is, I think, critical. Again, I think the biggest challenge is the scale. This is moving so quickly that you probably have to prioritize. This goes back to the data use that we were talking about before: you probably should be focusing on those AI tools that are really customer-facing, that are dealing with material non-public information, sensitive personal information, or CSI. And that becomes a data governance issue. Figuring out, okay, what are the systems where I'm going to employ AI that touch upon these high-risk areas; that probably should be where the priorities are. That's where we've seen a lot of concern. If you don't have your data governance in place and don't know which tools have highly sensitive information, it's really hard to have a governance structure around AI. So that's where, again, we're seeing a lot of financial institutions playing catch-up. Therese: All right. Well, thanks for that, Anthony. And thanks to Howard as well. I think we have maybe barely scratched the surface of AI in banking. But thanks to everyone for joining us today.
And please do join us for our next episode in our AI series. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.…
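For readers who want to picture the prioritization Anthony describes in this episode, focusing governance effort first on AI tools that are customer-facing or that touch MNPI, CSI or sensitive personal data, here is a minimal sketch in Python. It is purely illustrative: the category names, class and function names, and the two-tier outcome are hypothetical, not drawn from any regulator's framework.

# Illustrative only: a toy triage of AI tools for governance review,
# reflecting the idea of prioritizing customer-facing tools and tools
# that touch MNPI, CSI or sensitive personal data. Names and categories
# are hypothetical.
from dataclasses import dataclass, field

HIGH_RISK_DATA = {"MNPI", "CSI", "sensitive_personal_data"}

@dataclass
class AITool:
    name: str
    customer_facing: bool
    data_categories: set = field(default_factory=set)

def review_priority(tool: AITool) -> str:
    # Flag tools that are customer-facing or touch high-risk data
    # for deeper due diligence, testing and documentation.
    touches_high_risk_data = bool(tool.data_categories & HIGH_RISK_DATA)
    if tool.customer_facing or touches_high_risk_data:
        return "priority review"
    return "standard review"

if __name__ == "__main__":
    chatbot = AITool("client chatbot", customer_facing=True,
                     data_categories={"contact_data"})
    research = AITool("internal research summarizer", customer_facing=False,
                      data_categories={"MNPI"})
    print(chatbot.name, "->", review_priority(chatbot))    # priority review
    print(research.name, "->", review_priority(research))  # priority review

In practice a bank's triage criteria would be far richer than two flags, but the design point stands: the inventory has to capture data sensitivity and customer exposure before any prioritization is possible.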
Tech Law Talks

AI explained: AI and the impact on medical devices in the EU (24:37)
Regulatory lawyers Cynthia O’Donoghue and Wim Vandenberghe explore the European Union’s newly promulgated AI Act; namely, its implications for medical device manufacturers. They examine the new opportunities being created by AI, but they also warn that medical-device researchers and manufacturers have special responsibilities if they use AI to discover new products and care protocols. Join us for an insightful conversation on AI’s impact on health care regulation in the EU. ----more---- Transcript: Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Cynthia: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we're going to focus on AI and life sciences, particularly medical devices. I'm Cynthia O’Donoghue. I'm a Reed Smith partner in the London office in our emerging technology team. And I'm here today with Wim Vandenberghe. Wim, do you want to introduce yourself? Wim: Sure, Cynthia. I'm Wim Vandenberghe, a life sciences partner out of the Brussels office, and my practice is really about regulatory and commercial contracting in the life sciences space. Cynthia: Thanks, Wim. As I mentioned, we're here to talk about the EU AI Act, which came into force on the 2nd of August and has various phases for when different aspects come into effect. But I think a key thing for the life sciences industry, and for any developer or deployer of AI, is that research and development activity is exempt from the EU AI Act. And the reason that was done is because the EU wanted to foster research, innovation and development. The headline sounds great, but if, as a result of research and development, that AI product is going to be placed on the EU market and deployed, essentially sold or used in products in the EU, it does become regulated under the EU AI Act. And there seems to be a lot of talk about the interplay between the EU AI Act and various other EU laws. So Wim, how does the AI Act interplay with the medical devices regulation, the MDR, and the IVDR? Wim: That's a good point, Cynthia. And that's, of course, where a lot of medical device companies are looking at that interplay and potential overlap between the AI Act on the one hand, which is a cross-sectoral piece of legislation that applies to all sorts of products and services, and the MDR and the IVDR, which are of course only applicable to medical technologies. So in summary, both the AI Act and the MDR and IVDR will apply to AI systems, provided, of course, that those AI systems are in scope of the respective legislation. So maybe I'll start with the MDR and IVDR and then turn to the AI Act. Under the MDR and the IVDR, of course, there are many AI solutions that are either considered to be software as a medical device in their own right, or that are a part or component of a medical technology. So to the extent that an AI system, as software, meets the definition of a medical device under the MDR or under the IVDR, it would actually qualify as a medical device. And therefore, the MDR and IVDR are fully applicable to those AI solutions.
Stating the obvious, there are plenty of AI solutions that are already on the market and being used in a healthcare setting as well. What the AI Act particularly focuses on with regard to medical technology is the so-called high-risk AI systems. And for a medical technology to be a high-risk AI system under the AI Act, there is essentially a twofold set of criteria that needs to apply. First of all, the AI solution needs to be a medical device or an in vitro diagnostic under the sector legislation, so the MDR or the IVDR, or it is a safety component of such a medical product. Safety component is not really explained in the AI Act, but think about, for example, the failure of an AI system to interpret diagnostic IVD instrument data that could endanger the health of a person by generating false positives. That would be a safety component. So that's the first step: you have to see whether the AI solution qualifies as a medical device or is a safety component of a medical device. And the second step is that it applies only to AI solutions that are actually undergoing a conformity assessment by a notified body under the MDR or the IVDR. So to make a long story short, it actually means that medical devices that are either class IIa, IIb or III will be in the scope of the AI Act. And for the IVDR, for in vitro diagnostics, that would be risk classes B to D that would then be captured by the AI Act. So that essentially determines the scope and the applicability of the AI Act. And Cynthia, maybe coming back to an earlier point of what you said on research, the other curious thing that the AI Act doesn't really foresee is the fact that, of course, to get an approved medical device, you need to do certain clinical investigations and studies on that medical device. You really have to test it in a real-world setting. And that happens through a clinical trial, a clinical investigation. The MDR and the IVDR have elaborate rules about that. And the very fact that you do this prior to getting your CE mark and your approval and then launching it on the market is very standard under the MDR and the IVDR. However, under the AI Act, which also requires CE marking and approval, and we'll come to that a little bit later, there's no mention of such clinical and performance evaluation of medical technology. So if you were to just read the AI Act like that, it would mean that you actually need to have a CE mark for such a high-risk AI system, and only then can you do your clinical assessment. And of course, that wouldn't be consistent with the MDR and the IVDR. And we can talk a little bit later about consistency between the two frameworks as well. The one thing that I do see as being very new under the AI Act is everything to do with data and data governance. And I just wanted to ask, Cynthia, given your experience, if you can maybe talk a little bit about what the requirements are going to be for data and data governance under the AI Act? Cynthia: Thanks, Wim. Well, the AI Act obviously defers to the GDPR, and the GDPR, which regulates how data is used and transferred within the EEA member states and then transferred outside the EEA, all has to interoperate with the EU AI Act.
In the same way, as you were just saying, the MDR and the IVDR need to interoperate, and you touched, of course, on clinical trials, so the clinical trial regulation would also have to work and interoperate with the EU AI Act. Obviously, if you're working with medical devices, most of the time it's going to involve personal data and what is called special category data, data concerning health, about patients or participants in a clinical trial. So a key part of AI is the training data. The data that goes in, that's ingested into the AI system for purposes of a clinical trial or for a medical device, needs to be as accurate as possible. And obviously the GDPR also includes a data minimization principle, so the data needs to be the minimum necessary. But at the same time, that training data, depending on the situation, might be more controlled in a clinical trial. Once a product is put onto the market, there could be data ingested into the AI system that has anomalies in it. You mentioned false positives, but there's also a requirement under the AI Act to ensure that the ethical principles for AI, which were non-binding guidance from the EU, are adhered to. And one of those is human oversight. So obviously, if there are anomalies in the data and the outputs from the AI would give false positives or create other issues, the EU AI Act requires, once a CE mark is obtained, just like the MDR does, that there be a continuing conformity assessment to ensure that any anomalies, or the necessity for human intervention, are addressed on a regular basis as part of reviewing the AI system itself. So we've talked about high-risk AI. We've talked a little bit about the overlap between the GDPR and the EU AI Act, and the MDR and the IVDR overlap and interplay. Let's talk about some real-world examples. The EU AI Act also classes education as potentially high risk if any kind of vocational training is based solely on assessment by an AI system. How does that potentially work with the way medical device organizations and pharma companies might train clinicians? Wim: It's a good question. Normally, those kinds of programs would typically not be captured by the definition of a medical device under the MDR. So they'd most likely be out of scope, unless they are programs that actually extend to real-life diagnosis or cure or treatment, helping the physician to make their own decision. But if it's really about training, it would normally fall out of scope. And that would be very different here with the AI Act: it would be captured, it would be qualified as high risk. And what that would mean is that, unlike a medical device manufacturer that would be very used to a lot of the concepts that are used in the AI Act as well, manufacturers or developers of these kinds of software solutions wouldn't necessarily be sophisticated in the medical technology space in terms of having a risk and a quality management system, having your technical documentation verified, et cetera, et cetera. So I do think that's one of those examples where there could be a bit of a mismatch between the two.
You will have to see, of course, for a number of these obligations in relation to specific AI systems under the AI Act, whether it's the high-risk systems or the devices that you mentioned, Cynthia, which I think sit more in Annex III of the AI Act, that the European Commission is going to produce a lot of delegated acts and guidance documents. So it remains to be seen what the Commission is going to provide in more detail about this. Cynthia: Thanks, Wim. We've talked a lot about high-risk AI, but the EU AI Act also regulates general-purpose AI, and so chatbots and those kinds of things are regulated, but in a more minimal way under the EU AI Act. What if a pharma company or a medical device company has a chatbot on its website for customer service? Obviously, there are risks in relation to data and people inputting sensitive personal data, but there's got to be a risk as well in relation to users of those chatbots seeking to use that system to triage or ask questions seeking medical advice. How would that be regulated? Wim: Yeah, that's a good point. It would ultimately come down to the intended purpose of that chatbot. Is that chatbot really just about connecting the patient or the user with a physician, who then takes it forward? Or would it be a chatbot that is actually also functioning more as a kind of triage system, where the chatbot, depending on the input given by the user or the answers given, would start actually making its own decisions and would already point towards a certain decision, whether a cure or a treatment is required, et cetera? That would already be much more in the space of the medical device definition, whereas a general-use chatbot would not necessarily be. But it really comes down to the intended purpose of the chatbot. The one thing that is, of course, specific to an AI system versus a more standard software or chatbot system is that the AI's continuous learning may actually go above and beyond the intended purpose of what was initially envisaged for that chatbot. And that might be influenced, like you say, Cynthia, by the input. Maybe because the user is asking different questions, the chatbot may react differently and may actually go beyond the intended purpose. And that's really going to be a very difficult point of discussion, in particular with notified bodies, in case you need to have your AI system assessed by a notified body. Under the MDR and the IVDR, a lot of the notified bodies have gone on record saying that the process of continuous learning by an AI system, of course, ultimately entails a certain risk. And to the extent that a medical device manufacturer has not described, within certain boundaries, how the AI system can operate, that would actually mean that it goes beyond the approval and would need to be reassessed and reconfirmed by the notified body. So I think that's going to be something, and it's really not clear under the AI Act. There's a certain idea about change: to what extent, if the AI system learns and changes, do you need to seek new approval and conformity assessment? And that change doesn't necessarily correspond with what is considered to be a significant change. That's the wording being used under the MDR.
That doesn't necessarily correspond, again, between the two frameworks here as well. And maybe one point, Cynthia, on the chatbot: it wouldn't necessarily qualify as high risk, but there are certain requirements under the AI Act, right? If I understand well, it's a lot about transparency, that you're transparent, right? Cynthia: Mm-hmm. So anyone who interacts with a chatbot needs to be aware that that's what they're interacting with. So you're right, there's a lot about transparency and also explainability. But I think you're starting to get towards something about reconformity. I mean, if a notified body has certified the use, whether it be a medical device or AI, and there are anomalies in that data or it starts doing new things, potentially what is off-label use, surely that would trigger a requirement for a new conformity assessment. Wim: Yeah, absolutely. Under the MDR, it wouldn't necessarily, even though with the MDR now there's language that if you see a trend of off-label use, you need to report that. Under the AI Act, it's really down to what kind of change you can tolerate. And off-label use, by definition, is using the device for purposes other than the intended purposes the developer had in mind. And, again, if you're reading the AI Act strictly, that would probably then indeed trigger a new conformity assessment procedure as well. Cynthia: One of the things that we haven't talked about, and we've alluded to it in what we've been discussing about off-label use and anomalies in the data, is that the EU is talking about a separate AI liability regime, which is still in draft. So I would have thought that medical device manufacturers would need to be very cognizant that there's potential for increased liability under the AI Act; you've got liability under the MDR; and obviously the GDPR has its own penalty system. So this will require quite a lot of governance to try to minimize risk. Would you agree? Wim: Oh, absolutely. You touched on a very, very good point. I wouldn't say that, in Europe, we're moving entirely to a kind of claimant-friendly or class-action-friendly litigation landscape. But there are a lot of new laws being put in place that might actually trigger that a bit. You mentioned, rightfully, the AI liability directive that is still in draft stage, but you already have the new General Product Safety Regulation in place, you have the class action directive as well, and you have the whistleblower directive being rolled out in all the member states. So I think all of that combined, and certainly with AI systems, does create increased risk. And there are certain risks a medical technology company will be very familiar with: if we're talking about risk management, quality management, the drafting of the technical documentation, all labeling, document keeping, adverse event reporting, all of that is well known. But what is less well known are the use cases that we discussed, but also the overlap and the potential inconsistencies between the different legal systems, especially on data and data governance. I don't think a lot of medical technology companies are so advanced yet.
And we can already tell now, when a medical device that incorporates AI software is being certified, there are some questions. There's some language in the MDR about software and continuous compliance along the lifecycle of the software, but it's not at all as prescriptive as what will happen now with the AI Act, where you'll have a lot more requirements on data quality: what were the data sets that were used to train the algorithm, can we have access to them, et cetera, et cetera. You need to disclose that. That's certainly a big risk area. The other risk area that I would see, and again, that differs maybe a bit from the MDR, is that under the AI Act it's not just about imposing requirements on the developers, which essentially are the manufacturers. It's also on who are called the deployers. And the deployers are essentially the users. That could be hospitals or physicians or patients. And there are also requirements now being imposed on them. And I do think that's a novelty to some extent as well. So it will be curious to see how they deal with that, how medical device companies are going to interact with their customers, with their doctors and hospitals, to guarantee continuous compliance, not just with the MDR, but now also with the AI Act. Cynthia: Thanks, Wim. That's a lot for organizations to think about, as if those things weren't complicated enough under the MDR itself. I think some of the takeaways are obviously the interplay between the MDR, the IVDR and the EU AI Act; concerns around software as a medical device and the overlap with what is an AI system, which is obviously the potential for inferences and generating of outputs; and then concerns around transparency, being able to be open about how they're using AI and explaining its use. We also talked a little bit about some of the risks in relation to clinician education, off-label use, anomalies within the data, and the potential for liability. So, please feel free to get in touch if you have any questions after listening to this podcast. Thank you, Wim. We appreciate you listening, and please be aware that we will have more podcasts on AI over the coming months. Thanks again. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.…
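For readers who want to see the two-step test Wim describes in this episode laid out concretely, here is a minimal sketch in Python. It deliberately simplifies the legal analysis: step one asks whether the AI system is itself a medical device or IVD under the MDR or IVDR (or a safety component of one), and step two asks whether it is subject to notified-body conformity assessment (MDR class IIa, IIb or III; IVDR class B, C or D). The function and class-set names are hypothetical and the code is illustrative only, not legal advice.

# Illustrative sketch of the two-step high-risk test discussed in the episode.
# It simplifies the legal analysis; names and categories are hypothetical.

MDR_NOTIFIED_BODY_CLASSES = {"IIa", "IIb", "III"}   # class I is generally self-assessed
IVDR_NOTIFIED_BODY_CLASSES = {"B", "C", "D"}        # class A is generally self-assessed

def is_high_risk_medical_ai(
    qualifies_as_device_or_safety_component: bool,
    mdr_class: str | None = None,
    ivdr_class: str | None = None,
) -> bool:
    """Two-step test: (1) medical device/IVD or safety component, and
    (2) notified-body conformity assessment under the MDR or IVDR."""
    if not qualifies_as_device_or_safety_component:
        return False
    needs_notified_body = (
        mdr_class in MDR_NOTIFIED_BODY_CLASSES
        or ivdr_class in IVDR_NOTIFIED_BODY_CLASSES
    )
    return needs_notified_body

# Example: diagnostic triage software classified as MDR class IIa
print(is_high_risk_medical_ai(True, mdr_class="IIa"))   # True
# Example: wellness app that is not a medical device at all
print(is_high_risk_medical_ai(False))                   # False

As the episode notes, the harder questions (continuous learning, off-label use, reassessment triggers) sit outside any simple rule like this and will depend on Commission guidance and notified-body practice.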
Tech Law Talks

Reed Smith emerging tech lawyers Andy Splittgerber in Munich and Cynthia O’Donoghue in London join entertainment & media lawyer Monique Bhargava in Chicago to delve into the complexities of AI governance. From the EU AI Act to US approaches, we explore common themes, potential pitfalls and strategies for responsible AI deployment. Discover how companies can navigate emerging regulations, protect user data and ensure ethical AI practices. ----more---- Transcript: Intro: Hello and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day. Andy: Welcome to Tech Law Talks and our new series on artificial intelligence. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape globally. Today, we'll focus on AI and governance, with a main emphasis on generative AI and a regional perspective, looking at Europe and the US. My name is Andy Splittgerber. I'm a partner in the Emerging Technologies Group of Reed Smith in Munich, and I'm also very actively advising clients and companies on artificial intelligence. Here with me, I've got Cynthia O'Donoghue from our London office and Nikki Bhargava from our Chicago office. Thanks for joining. Cynthia: Thanks for having me. Yeah, I'm Cynthia O'Donoghue. I'm an emerging technology partner in our London office, also currently advising clients on AI matters. Monique: Hi, everyone. I'm Nikki Bhargava. I'm a partner in our Chicago office in our entertainment and media group, and I'm really excited to jump into the topic of AI governance. So let's start with a little bit of a basic question for you, Cynthia and Andy. What is shaping how clients are approaching AI governance within the EU right now? Cynthia: Thanks, Nikki. The EU has, let's say, just received a big piece of legislation, which went into effect on the 2nd of August, that regulates general-purpose AI and high-risk AI and bans certain aspects of AI. But that's only part of the European ecosystem. The EU AI Act will essentially interplay with the General Data Protection Regulation, the EU's Supply Chain Act, and the latest cybersecurity law in the EU, which is the Network and Information Security Directive No. 2. So essentially there's a lot for organizations to get their hands around in the EU, and the AI Act has phased dates of effectiveness. But the EU AI Act lays out quite a lot in terms of governance, and so it's a perfect time for organizations to start thinking about that and getting ready for various aspects of the AI Act as they in turn come into effect. How does that compare, Nikki, with what's going on in the U.S.? Monique: So, the U.S. is still evaluating, from a regulatory standpoint, where it's going to land on AI regulation. That's not to say that we don't have legislation that has been put into place. We have Colorado with the first comprehensive AI legislation that went in. And earlier in the year, we also had guidelines from the Office of Management and Budget to federal agencies about how to procure and implement AI, which has really informed the governance process.
And I think a lot of companies, in the absence of regulatory guidance, have been looking to the OMB memo to help inform what their process may look like. And the one thing I would highlight, because we're operating in this area of unknown and yet-to-come guidance, is that a lot of companies are looking to their existing governance frameworks right now and evaluating, from a company culture perspective, a mission perspective, and their relationship with consumers, how they want to develop and implement AI, whether internally or externally. And a lot of the governance process and program pulls guidance from some of those internal ethics as well. Cynthia: Interesting. So I'd say it's somewhat similar in the EU, but I think, Andy, the US puts more emphasis on consumer protection, whereas the EU AI Act is more all-encompassing in terms of governance. Wouldn't you agree? Andy: Yeah, that was also the question I wanted to ask Nikki: where she sees the parallels and whether organizations, in her view, can follow a global approach for AI governance. And yes, to the question you asked: the European AI Act is more encompassing. It is putting a lot of obligations on developers and deployers, meaning companies that use AI in the end. Of course, it also has consumer or user protection in mind, but the rules directly relating to consumers or users are, I would say, limited. So, Nikki, you know US law and you have a good overview of European laws, while we are always struggling with the many US laws. What's your thought: can companies, in terms of AI governance, follow a global approach? Monique: In my opinion? Yes, I do think that there will be a global approach. The way the US legislates, what we've seen is a number of laws governing certain uses and outputs first, perhaps because they were easier to pass than a comprehensive law. So we see laws that govern the output in terms of use of likenesses, right-of-publicity violations. We're also seeing laws come up that regulate the use of personal information in AI as a separate category. We're also seeing, outside of the consumer and corporate space, a lot of laws around elections. And then finally, we're seeing laws pop up around disclosure for consumers who are interacting with AI systems, for example, AI-powered chatbots. But as I mentioned, the US is taking a number of cues from the EU AI Act. So, for example, Colorado did pass a comprehensive AI law, which speaks to both obligations for developers and obligations for deployers, similar to the way the EU AI Act is structured, and focuses on what Colorado calls high-risk AI systems, as well as algorithmic discrimination, which I think doesn't exactly follow the EU AI Act, but draws similar parallels and pulls in a lot of the same principles. That's the kind of law which I really see informing companies on how to structure their AI governance programs, probably because, the simple answer is, it requires deployers at least to establish a risk management policy and procedure and an impact assessment for high-risk systems. And, impliedly, it really requires developers to do the same, because developers are required to provide a lot of information to deployers so that deployers can take the legally required steps in order to deploy the AI system.
And so, inherently, to me, that means that developers have to have a risk management process themselves if they're going to be able to comply with their obligations under Colorado law. So, because I know that there are a lot of parallels between what Colorado has done, what we see in the OMB memo to federal agencies, and the EU AI Act, maybe I can ask you, Cynthia and Andy, to talk a little bit about some of the ways that companies approach setting up the structure of their governance program. What are some of the buckets that they look at, or what are some of the first steps that they take? Cynthia: Yeah, thanks, Nikki. It's interesting because you mentioned company-specific uses, internal and external. I think one thing, before we get into the governance structure, or maybe as part of thinking about the governance structure, is that the EU AI Act also applies to employee data and to the use of AI systems for vocational training, for instance. So in terms of governance structure, certainly from a European perspective, it's not necessarily about use cases, but really about whether you're using high-risk or general-purpose AI, and some of the documentation and certification requirements that might apply to high-risk versus general-purpose. But the governance structure needs to take all those kinds of things into account. So, obviously, guidelines and principles about how people use external AI suppliers, how it's going to be used internally, and what the appropriate uses are. Obviously, if it's going to be put into a chatbot, which is the other example you used, what are the rules around acceptable use by people who interact with that chatbot, as well as how is that chatbot set up in terms of what it would be appropriate to use it for? So what are the appropriate use cases? Guidelines and policies are definitely foremost for that. And within those guidelines and policies, there are also the other documents that will come along: terms of use, I mentioned acceptable use, and then guardrails for the chatbot. One of the big things for the EU AI Act is human intervention, to make sure that if there are any anomalies or somebody tries to game it, there can be intervention. So, Andy, I think that dovetails into the risk management process, if you want to talk a bit more about that. Andy: Yeah, definitely. I mean, the risk management process in the wider sense. How organizations start this at the moment is first setting up teams or responsible persons within the organization who take care of this, and we're going to discuss a bit later on what that structure can look like. And then, of course, the policies you mentioned, not only regarding the use, but also which process to follow when AI is being used, or even the question of what is AI: how do we find out where in our organization we're using AI, and what is an AI system as defined under the various laws, also making sure we have a global interpretation of that term. And then a step many of our clients are taking at the moment is setting up an AI inventory. That's already a very difficult and tough step. And the next one is then, per AI system that comes up in this register, to define the risk management process.
And of course, that's the point where, in Europe, we look into the AI Act and see what kind of AI system we have: high risk or any other sort of defined system. Today, we're talking about generative AI systems a bit more. For example, there we have strong obligations in the European AI Act on the providers of such generative AI. So less on companies that use generative AI, and more on those that develop and provide the generative AI, because they have the deeper knowledge of what kind of training data is being used. They need to document how the AI is working, and they also need to register this information with a centralized database in the European Union. They also need to give some information on copyright-protected material that is contained in the training data. So there are quite a few documentation requirements, and then of course logging requirements, to make sure the AI is used responsibly and does not trigger higher risks. There are also two categories of generative AI that can be qualified. So that's broadly the risk management process under the European AI Act. And then, of course, organizations also look into risks in other areas: copyright, data protection, and also IT security. Cynthia, I know IT security is one of the topics you love. Can you add some more on IT security here, and then we'll see what Nikki says for the US. Cynthia: Well, obviously NIS 2 is coming into force. It will cover providers of certain digital services, so it's likely to cover providers of AI systems in some way or other. And funnily enough, NIS 2 has its own risk management process involved. So there's supply chain due diligence involved, which would have to be baked into a risk management process for that. And then ENISA, the EU's cybersecurity agency, has put together a framework for cybersecurity for AI systems, which is not binding, but it's certainly a framework that companies can look to in terms of getting ideas for how best to ensure that their use of AI is secure. And then, of course, under NIS 2, the various CSIRTs will be putting together various codes, and they have a network meeting in late September. So we may see more come out of the EU on cybersecurity in relation to AI. But obviously, just like any user of AI, they're going to have to ensure that the provider of the AI has ensured that the system itself is secure, including if they're going to be putting training data into it, which of course is highly probable. I just want to say something about the training data. You mentioned copyright, and there's a difference between the EU and the UK. In the UK, you cannot mine data for commercial purposes. At one point, the UK was looking at an exception to copyright for that, but it doesn't look like that's going to happen. So there is a divergence there, but that stems from historic UK law rather than from the change from Brexit. Nikki, turning back to you again, we've talked a little bit about risk management. How do you think that might differ in the US, and what kind of documentation might be required there? Or is it a bit looser? Monique: I think there are actually quite a few similarities that I would pull from what we have in the EU. And Andy, I think this goes back to your question about whether companies can establish a global process, right? In fact, I think it's going to be really important for companies to see this as a global process as well.
Because AI development is going to happen throughout the world, and it's really going to depend on where it's developed, but also where it's deployed and where the outputs are deployed. So I think taking a broader view of risk management will be really important in the context of AI, particularly given that the nature of AI is to process large swaths of information, really on a global scale, in order to make these analytics and creative development and content generation processes faster. So, just a quick aside: I actually think what we're going to see in the US is a lot of pulling from what we've seen in the EU, and a lot more cooperation on that end. I agree that really starting to frame the risk governance process means looking at who the key players are that need to inform the risk measurement and tolerance analysis, and the decision-making in terms of how you evaluate, how you inventory and evaluate, and then determine how to proceed with AI tools. And one of the things that I think makes it hopefully a little bit easier is being able to leverage, from a U.S. perspective, existing compliance procedures that we have, for example, for SEC compliance or privacy compliance or other ethics compliance programs, and make AI governance a piece of that, as well as expand on it. Because I do think that AI governance brings in all of those compliance pieces. We're looking at harms that may exist to a company, not just from personal information, not just from security, not just from consumer unfair and deceptive trade practices, not just from environmental standpoints, but from a very holistic view of, not to make this a bigger thing than it is, kind of everything, right? Every aspect that comes in. And you can see that in some of the questions that developers or deployers are supposed to be able to answer in risk management programs. For example, in Colorado, the information that you need to be able to address in a risk management program and an impact assessment really has to demonstrate an understanding of the AI system: how it works, how it was built, how it was trained, what data went into it. And then, what is the full range of harms? So, for example, the privacy harms, the environmental harms, the impact on employees, the impact on internal functions, the impact on consumers if you're using it externally, and really being able to explain that. Whether you have to put out a public statement or not will depend on the jurisdiction. But even internally, you need to be able to explain it to your C-suite and make them accountable for the tools that are being brought in, or make it explainable to a regulator if they were to come in and say, well, what did you do to assess this tool and mitigate known risks? So, with that in mind, I'm curious: what steps do you think need to go into a governance program? What are some of the first initial steps? I always feel that we can start in so many different places, depending on how a company is structured or what the initial compliance pieces are. But I'm curious to know from you, what would be one of the first steps in beginning the risk management program? Cynthia: Well, as you said, Nikki, one of the best things to do is leverage existing governance structures.
If we look, for instance, at how the EU is even setting up its public authorities to look at governance, you've got, as I mentioned at the outset, almost a multifaceted team approach. And I think it would be the same. The EU anticipates that there will be an AI officer, but obviously there have got to be team members around that person. There will be people with subject matter expertise in data, subject matter expertise in cyber. And then there will be people who have subject matter expertise in relation to the AI system itself: the training data that's been used, how it's been developed, how the algorithm works, whether or not there can be human intervention, what happens if there are anomalies or hallucinations in the data, and how that can be fixed. So I would have thought that ultimately part of that implementation is looking at the governance structure and then starting from there. And then, obviously, we've talked about some of the things that go into the governance. But we have clients who are looking first at the use case and then asking, okay, what are the risks in relation to that use case? How do we document it? How do we log it? How do we ensure that we can meet our transparency and accountability requirements? What other due diligence and other risks are out there, blue-sky thinking, that we haven't necessarily thought about? Andy, anything to add? Andy: Yeah, that's, I would say, one of the first steps. Even though not many organizations allocate the core AI topic to the data protection department now, but rather perhaps to the compliance or IT area, still, in terms of the governance process and starting up that structure, we see a lot of similarities to the GDPR governance structure. So I think back five years to the implementation of, or getting ready for, GDPR: planning and checking what other rules we need to comply with, who we need to involve, getting the plan ready, and then working along that plan. That's the phase where we see many of our clients at the moment. Nikki, more thoughts from your end? Monique: Yeah, I think those are excellent points. And what I have been talking to clients about is first establishing the basis of measurement that we're going to evaluate AI development or procurement against: what are the company's internal principles and risk tolerances, and defining those. And then, based on those principles and those metrics, putting together an impact assessment, which, as you both said, borrows a lot from the concept of impact assessments under privacy compliance, right? To implement the right questions and put together the right analytics in order to measure whether an AI tool that's in development is measuring up to those metrics, or whether something that we are procuring is meeting those metrics, and then analyzing the risks that come out of that. I think the impact assessment is going to be really important in helping make those initial determinations. But also, and this is not just my feeling, this is something that is also required under the Colorado law, setting up an impact assessment and then repeating it annually, which I think is particularly important in the context of AI, especially generative AI, because generative AI is a learning system.
So it is going to continue to change, and there may be additional modifications made in the course of use that are going to require reassessing: is the tool working the way it is intended to work? What has our monitoring of the tool shown? And what are the processes we need to put into place in order to mitigate the tool going a little bit off path, AI drift, more or less, or, if we start to identify issues within the AI, what processes do we have internally to redirect the ship in the right direction? So I think impact assessments are going to be a critical tool in helping form the rest of the risk management process that needs to be in place. Andy: All right. Thank you very much. I think these were a couple of really good practical tips and, especially, first next steps for our listeners. We hope you enjoyed the session today, and we look forward to any feedback, either here in the comment boxes or directly to us. And we hope to welcome you soon to one of our next episodes on AI and the law. Thank you very much. Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts. Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers. All rights reserved. Transcript is auto-generated.…
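To make the inventory-plus-assessment workflow that Andy and Nikki describe in this episode a little more concrete, here is a minimal sketch in Python of what a single AI inventory entry with a periodic impact assessment reminder could look like. The field names, risk labels and the one-year review interval are assumptions for illustration only; they are not taken from the AI Act, the Colorado law, or any other statute.

# Illustrative AI inventory entry with a periodic impact assessment check.
# Field names, risk labels and the review interval are hypothetical.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    role: str                    # e.g. "provider" or "deployer"
    risk_classification: str     # e.g. "high-risk", "general-purpose", "minimal"
    data_categories: list = field(default_factory=list)
    last_impact_assessment: date | None = None

    def assessment_due(self, interval_days: int = 365) -> bool:
        """True if no assessment exists or the last one is older than the interval."""
        if self.last_impact_assessment is None:
            return True
        return date.today() - self.last_impact_assessment > timedelta(days=interval_days)

inventory = [
    AISystemRecord("HR screening assistant", "Vendor A", "deployer", "high-risk",
                   ["employee data"], last_impact_assessment=date(2023, 9, 1)),
    AISystemRecord("Marketing copy generator", "Vendor B", "deployer", "general-purpose"),
]

for record in inventory:
    if record.assessment_due():
        print(f"{record.name}: impact assessment due")

A real inventory would carry far more detail (training data provenance, contractual terms, testing results, human-oversight measures), but even a thin record like this makes the annual reassessment Nikki mentions straightforward to track.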