The Generative AI Video Series

What is Trustworthy AI?

00:00 Introduction 
00:53 What is Trustworthy AI? 
01:44 How did we get there? Historical Context and Evolution of AI
03:54 Trustworthy AI is Legal 
05:51 Trustworthy AI is Robust 
06:46 Trustworthy AI is Ethical 
08:58 Why should we care? 
14:29 How is the appliedAI Institute building Trustworthy AI? 
16:16 Conclusion and Final Thoughts 

In this episode, Till, Head of Trustworthy AI at the appliedAI Institute for Europe, explains what constitutes Trustworthy AI and its significance. He breaks down the three main pillars: legal compliance, technical robustness, and ethics. Till discusses the historical context of AI, the impact of Trustworthy AI on law, privacy, and industrial applications, and the importance of transparency and risk assessment. Additionally, he highlights the efforts of the appliedAI Institute to support and facilitate the implementation of Trustworthy AI through resources and active engagement with policymakers. Join us to understand what Trustworthy AI actually means, why we should care about it, and how it can be implemented effectively. Visit our website: https://www.appliedai-institute.de/en/ Don't forget to like, share, and subscribe for more insightful content!

Over the next few minutes, I will break down for you what Trustworthy AI is and why we should care about it.

My name is Till. I am the Head of Trustworthy AI at the appliedAI Institute for Europe. In this video, we will talk about what Trustworthy AI is. We often hear politicians or other high-level people say we need Trustworthy AI and that it's so important, but what does it actually mean? Today, I want to help you understand what exactly Trustworthy AI is, what components it's made of, why we should care about it, and also what we are doing at appliedAI to help you get closer to it.

We can say Trustworthy AI consists of three main pillars. Number one, it needs to meet the law. The law already exists; it was around even before AI, and so we need to take it into account when developing and using AI systems. Secondly, it needs to be robust from a technical standpoint. As with every other product, it needs to simply do what it's supposed to do.

And thirdly, it needs to be ethical. Because we know people are interacting with those AI systems. They help us make decisions or take actions, and so, eventually, they will affect people out there in many different ways. But before we break down those three main components, let's take a step back for a moment and think about how we actually got here.

AI has actually been around for quite a number of decades. And we've seen so-called hype cycles. These were, on the one side, periods where people were really excited about AI: lots of investment, lots of excitement. And they were usually followed by so-called AI winters, moments when certain expectations were not fulfilled.

And so investment and interest went down. After a couple of such cycles, around 2011 or 2012, there were major breakthroughs in things like image recognition and text processing. And since then, it has gone all the way up. Of course, the latest hype came with large models such as ChatGPT, also known as generative AI.

Over this period of time, it became clearly noticeable that AI is not a niche. It was really clear this technology is here to stay, and it's going into all the different areas of life and industry. And because of those new capabilities, it was clear that existing law reaches its limits. Because AI is really quite different from what we already know.

AI has certain levels of autonomy. It can do things by itself if we allow it to, and it's useful to let it because it gives us huge benefits. But with that often comes the question: whose fault is it if something goes wrong? What happened is that societal concerns arose. By now we all have AI in our phones, in our cars, and of course online, in social media, and elsewhere.

And reports have come up where things actually went wrong and really caused damage. Not only in terms of some autonomous car hitting someone and someone being injured, but we've also seen other types of damage, more in the space of fundamental human rights, where many, many people got discriminated against, maybe because of their gender, maybe because of their socioeconomic status.

So with this new technology also came a new type of damage, meaning we need to make sure that AI is used, but in a responsible and trustworthy fashion. That's why Trustworthy AI is such a big thing, not only for politicians, but also for every practitioner using and implementing it.

In this first part, I want to talk about the question of legal compliance. Let's imagine a use case. We are at a factory and we have this big machine. It has moving parts. Maybe it's a big drill; maybe there's some industrial robot putting parts together. But we have this big machine, and it's equipped with an AI system that is used for predictive maintenance.

So the system is supposed to tell us when this machine might break, when it might need maintenance, and it does so maybe in different ways. Suppose there's a sensor that picks up the vibration, and then if the system gives an alert, someone will come and fix it.
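To make this concrete, here is a minimal sketch of what such an alert logic could look like. It is purely illustrative: the baseline level, the alert factor, and the sample values are invented assumptions, not taken from any real system.

```python
# Illustrative sketch of a threshold-based predictive maintenance alert.
# Baseline and alert factor are made-up values for demonstration only.
BASELINE_RMS = 0.8   # hypothetical "healthy" vibration level (arbitrary units)
ALERT_FACTOR = 1.5   # alert once vibration rises 50% above the baseline

def rms(window: list[float]) -> float:
    """Root-mean-square amplitude of a window of vibration samples."""
    return (sum(x * x for x in window) / len(window)) ** 0.5

def needs_maintenance(window: list[float]) -> bool:
    """Raise an alert when the vibration energy exceeds the baseline."""
    return rms(window) > BASELINE_RMS * ALERT_FACTOR

# Example: a window of sensor samples with unusually strong vibration.
samples = [1.4, -1.5, 1.6, -1.3, 1.5, -1.6]
if needs_maintenance(samples):
    print("Alert: schedule maintenance for this machine.")
```

In a real deployment, the threshold would of course be learned from historical sensor data rather than hard-coded.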

Now, when we look at this from more of a legal standpoint, we can ask the question of liability. Suppose the AI system produces a false positive or a false negative; say the machine is about to break, but the system does not give an alert. The machine breaks, now what? If the assembly line stands still, it has a huge economic impact for the company.

That's a problem. Whose fault is it if the system gets it wrong? This is one area where liability law should help us map out in advance who is responsible for what in this value chain where AI is used, maybe developed by one party and used by another. Another point is about privacy. We could imagine that this machine is equipped with some sort of access control.

So when someone wants to open the hood, only certain people can do it. Maybe in the past this was done with some kind of key card; now it's done with a face scan of the operator. So now we have biometric images of individuals. Here, laws like the GDPR help us make sure that any personal data that is collected is also treated in a rightful way.

So up to this point, existing law continues to apply. And if we're using AI, we need to take that into account.

The second part of Trustworthy AI is about robustness from a technical standpoint. So we continue with our use case, which is predictive maintenance. There are a couple of things that can happen which potentially disturb the system and prevent it from working as intended. It could be, for instance, that there's something wrong with the input data.

We said it's using certain sensors to detect vibration from the machine, and if this vibration pattern changes, it should say, ah, maybe there's something wrong and someone needs to look into it. Now, it could be that there's another machine standing right next to it which is also vibrating and which overshadows the signal coming in.

But what will our AI system do? Will it give a false alert, or no alert? Or will it actually be robust enough to handle that and detect that something is off with its input?
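One simple way to build in that kind of robustness is to sanity-check the input before trusting the model's output. Here is a minimal sketch under invented assumptions: the plausible amplitude range and alert threshold below are hypothetical and would in practice come from the sensor specification or from data recorded during normal operation.

```python
# Sketch of a plausibility check on sensor input before making a prediction.
MIN_AMPLITUDE = 0.0
MAX_AMPLITUDE = 5.0    # readings above this suggest outside interference
ALERT_THRESHOLD = 1.2  # vibration level that triggers a maintenance alert

def plausible(window: list[float]) -> bool:
    """True only if every sample lies in the physically plausible range."""
    return all(MIN_AMPLITUDE <= abs(x) <= MAX_AMPLITUDE for x in window)

def guarded_prediction(window: list[float]) -> str:
    if not plausible(window):
        # Rather than a silent false alert, flag the input itself as suspect.
        return "input error: reading out of range, please check the sensors"
    mean_level = sum(abs(x) for x in window) / len(window)
    return "maintenance alert" if mean_level > ALERT_THRESHOLD else "ok"

# A neighboring machine drowning out the signal shows up as an input error,
# not as a false maintenance alert.
print(guarded_prediction([7.2, -6.9, 7.5, -7.1]))
```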

The third pillar of Trustworthy AI is ethics. Ethics is probably the fuzziest term of the three, right? Because we all know what a law is, and we all have an idea of whether something works properly or not. But with this whole ethics debate, I often get the sense that people are a little insecure about what it actually means.

I find it useful to think about the values of the people who are in touch with the system, who are affected by the system. What is it that is important to them? And to take that into account when you design the system and when you put it into use. Coming back to our predictive maintenance use case in the factory.

Suppose, besides the technical inputs, the sensors collecting the vibration, the shift plan is also used as a factor to predict when the machine will fail. Because maybe we know there are three working shifts throughout the day, and we know when this person was operating and when that person was operating.

And perhaps the AI system could pick this up and say: if this person is using the machine, it's more likely to break. Now, we could make a wrong inference from this and say, maybe this person is a really bad operator. For instance, that information could be misused in the end-of-year performance review, when it's about job promotions and salary increases.

Maybe someone says, oh, hang on, actually the machine was down a lot when you were operating it. We need to be really mindful when mapping those kinds of data points to our systems, so as not to cast suspicion on someone's performance when it's actually not justified. But then it's also about things that a person simply cares about.

The idea of self-determination or self-efficacy. Maybe I've learned a skill, I'm working on this machine, it's my craft, I really master it in a certain way, and now I have this semi-autonomous machine that tells me what to do and what not to do. Being told what to do by a machine will not make me feel really great about my skills, which I have honed and improved over my whole career.

And yes, AI is good if it helps us improve, but it's of course important to make sure that whoever is affected by it, directly or indirectly, still feels good about it.

Now that we have clarified what Trustworthy AI is and what its three main pillars are, let's look at why it matters. Oftentimes we see people who are in favor of the concept of trustworthiness. They really support the idea and say, yes, AI should be trustworthy. At the same time, we see that implementing it from a practical standpoint requires quite some additional effort.

So why should we actually care, and why is it worthwhile doing all this extra work to implement Trustworthy AI? Number one is a fairly simple principle: no trust, no use. If I'm the user who is supposed to make use of this AI system, be it predictive maintenance, be it something else, be it in a bank, in a hospital, in a school, and I don't really trust the system, then I'm probably not going to use it.

Imagine you get a new device, you familiarize yourself with it, and only once you really have the feeling, I know how this works, are you confident using it. If we make systems more trustworthy, and people are more comfortable using them, the chances are simply higher that more people will use them.

And we at appliedAI believe this is really important to drive the adoption of AI at scale, and that's one main reason for us to support Trustworthy AI. The second main reason why Trustworthy AI is so important builds a little bit on the first one: instead of no trust, no use, we can say no compliance, no use.

Just recently, the EU finalized what is now known as the European AI Act, the EU AI Act. It's a European law, basically. It applies on a more horizontal level, so across the different sectors, be it banking, machinery, medical, and so on. It lays out the rules for anybody in the EU who wants to develop, sell, or use artificial intelligence, be it in a commercial or a non-commercial setting.

Now, this regulation takes what is called a risk-based approach. There are certain types of AI systems which are considered high-risk, and before you can actually make them available, you need to meet certain requirements. There are also limited-risk AI systems, where the rules are not so stringent, all the way down to low-risk systems, for which you don't have any mandatory obligations.
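As a rough illustration only, and certainly not legal advice, the risk-based logic can be sketched like this. The tiers and the gate are heavily simplified; the actual classification under the AI Act depends on the system's intended purpose and context.

```python
from enum import Enum

class RiskClass(Enum):
    # Heavily simplified tiers for illustration only.
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def may_enter_market(risk: RiskClass, requirements_met: bool) -> bool:
    """'No compliance, no use': high-risk systems are gated on requirements."""
    if risk is RiskClass.HIGH:
        return requirements_met
    # Lighter or no mandatory obligations for the other tiers (simplified).
    return True

print(may_enter_market(RiskClass.HIGH, requirements_met=False))  # -> False
```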

But if we stick to this high-risk path, then we can really say: no compliance, no use. You need to meet the rules before you can do business. It's like you cannot simply open a bank or sell to hospitals; there are certain rules you need to fulfill before you can participate in the market.

And if we, as the EU, as the member states, want to have a prosperous AI economy, it is absolutely important to implement Trustworthy AI from the get-go. And that's also why we at appliedAI really want to facilitate and support navigating the AI Act and the questions surrounding this topic, so that we can ramp up competitiveness from an economic and industrial standpoint as well.

A third point why Trustworthy AI is important is that it's actually not too far away from what much of the engineering community wants to do anyway. When I talk to machine learning engineers, predominantly senior ones, and people in the technical domain, they really want to stand for AI systems that work as intended, that are robust to their environment, and that meet really high standards of security and accuracy.

Now, these happen to be pretty much exactly the same things that the AI Act wants them to do. A main difference is the motivation. On the one side, we have engineers who simply enjoy solving really hard technical problems. On the other side, we have a piece of law which is basically forcing someone to do something.

And of course, no one wants to have their arm twisted or be forced to do something. But the bottom line is, it's often for the very same goal. And we want to bridge that a little bit and make sure that implementing Trustworthy AI is not seen as an unnecessary burden; that becoming trustworthy is not a box-ticking exercise but something you actually believe in because you can see the value behind it.

The fourth reason why Trustworthy AI is so important is that it bundles a number of really important aspects on your way from developing a system to putting it into service: accountability, transparency, and risk assessment. They all fall under the umbrella of Trustworthy AI. For instance, Trustworthy AI involves the notion of accountability.

Who is responsible for what? It makes you think more carefully about how you assign responsibilities along the way. Another point is quantifying risks. We can all imagine that something might go wrong, but when we are serious about Trustworthy AI, we really try to identify the risks more precisely and to evaluate them: what is their probability?

And if they happen, how bad is it? This way, we can still take a risk, but now it's a much more conscious one, because we can say much better what's at stake if things go wrong. A third aspect here would be transparency: simply making sure that whoever is affected by the system, whoever is using the system, has the right information at hand to make good and sound decisions.
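A common way to make that risk evaluation concrete is a simple probability-times-impact score. The figures below are invented purely for illustration:

```python
# Toy risk register: probability per year times impact (cost if it happens).
# All numbers are invented for illustration only.
risks = {
    "missed alert, assembly line stands still": (0.10, 200_000),
    "false alert, unnecessary maintenance check": (0.30, 2_000),
}

for name, (probability, impact) in risks.items():
    expected_loss = probability * impact
    print(f"{name}: expected loss per year = {expected_loss:,.0f} EUR")
```

Even such a rough calculation makes the trade-off explicit: here the rare missed alert dominates the expected damage, so that is where mitigation effort should go first.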

If I'm an economic operator, like a company or a large language model provider, and I think about those points really carefully, then I can enable my customers and my suppliers to jointly make this work much better, to the benefit of the system in terms of AI really delivering value, but also for the users, who in the end will be more comfortable using the system.

So now that we've discussed what Trustworthy AI is and why it's so important, let's have a closer look at how we at the appliedAI Institute for Europe help you implement Trustworthy AI in practice. There are two main things. Number one, we're offering a growing portfolio of all kinds of resources for you, free of charge, which you can find on our website:

Trainings and content material for you to read and better understand those requirements; methods and databases that help you figure out, for your own use cases, which risk class you fall into under the AI Act; and tools to help you build more robust and more explainable or interpretable systems.

And we also have other insights that give you more in-depth knowledge about what is happening in the ecosystem. We conduct studies every now and then, and they too can assist you as an individual in advancing and putting Trustworthy AI into action in your context. At the same time, and this is the second part, we are also engaging actively with policymakers: in Brussels, for instance with the EU Commission but also the EU Parliament; in Berlin, talking to different ministries; and also here,

regionally in southern Germany, in Bavaria and Baden-Württemberg. We want to be an active participant in this whole debate about what those rules about AI should look like. And we always try to bring in a practical perspective, coming from conversations with you, the individuals and professionals in the AI ecosystem: to pick up the concerns and wishes you have, to speak with one voice, and to make this information available to those making the rules, because the rules ultimately depend on evidence from practice to become more effective and actually feasible from a practical standpoint.

Now, wrapping up, we are at the end of this session. Thank you so much for tuning in; it's been a great pleasure talking to you and giving you some insights about Trustworthy AI. Of course, whenever you use generative AI or any other AI application, make sure you keep those principles in mind. If you have any questions, write them down in the comment section.

We'd love to read your feedback, be it questions or otherwise, so just type away. Also, don't forget to subscribe to our channel and give it a like if you enjoyed this content. Thank you.
