AI is taking the world by storm, but the opaque nature of its algorithms has also raised concerns about the reliability and fairness of the output it generates, as well as the potential for the technology itself to be used for malicious purposes. Artificial intelligence has a trust problem that needs to be resolved, not only for ethical reasons, but also because regulators need to be able to look into AI's "black boxes" to understand how they work.
Thanks to its transparent, open-source nature and its strong focus on auditability, blockchain has the potential to act as a decentralized, encryption-based guardrail for AI systems. Listen to Shawn Helms, co-head of McDermott Will & Emery's technology and outsourcing practice, discuss the intersection of AI and blockchain with: Anna Gressel, Senior Associate at Debevoise & Plimpton and a member of its fintech and technology practice; Erwin Voloder, Head of Policy at the European Blockchain Association; and Kai Zenner, Head of Office and Digital Policy Adviser for Member of the European Parliament Axel Voss.
View on Zencastr
Senior Associate at Debevoise & Plimpton LLP.
Senior Policy Fellow, European Blockchain Association.
Head of Office and Digital Policy Adviser for MEP Axel Voss.
Co-head of the Firm’s Technology & Outsourcing Practice for McDermott Will & Emery.
The transcript below is auto-generated by an AI. Although the transcription is largely accurate, in some cases it is incomplete or inaccurate due to inaudible passages or transcription errors. Please excuse any typos.
Shawn Helms
All right, hello everyone, and thanks for joining us today. My name is Shawn Helms. I'm the head of the technology transactions group at the law firm of McDermott Will & Emery, and I have a great panel here with me today. We've all gotten a chance to know each other over the past few weeks, and I'm excited to be doing this podcast with them today. So let's do a quick round of introductions. Erwin, do you want to get us started?
Anna Gressel
I can jump in to start. I'm Anna Gressel from Debevoise & Plimpton. I'm a senior associate there, and I focus on AI governance, regulation, and compliance, helping companies build AI tools at every stage of the lifecycle and then defend them before regulators or in civil suits.
Shawn Helms
All right, thanks, Anna.
Erwin Voloder
Great to see everybody again, Shawn, Anna, Kai. My name's Erwin Voloder. I'm the head of policy at the European Blockchain Association, so I centrally coordinate between the Web3 community and the EU institutions and work at EU level on everything related to blockchain, digital assets, and the development of smart contracts in the EU single market.
Kai Zenner
And I'm Kai Zenner from the European Parliament, working there for MEP Axel Voss and the EPP Group. We are quite active when it comes to AI legislation, AI liability, and data protection.
Shawn Helms
Great, great. Well, thank you again. I'm really excited to be talking to you all on this topic. We hear a lot in the news about artificial intelligence these days. I have been predicting an inflection point in artificial intelligence for about ten years, and I've been wrong for nine of those ten years, but given all the movement in generative AI, I really think now is that inflection point: the democratization of artificial intelligence in a way that I don't think we've ever seen before. As ChatGPT sprang upon the world, and we saw generative AI creating images of people that were flying around TikTok, and all kinds of generative AI technologies producing not only images but text and audio, it really raised not only the public consciousness but the public imagination as to what could happen with this technology. At the same time, we're hearing a lot of doomsday predictions, and people are really worried about the technology. We've had technology leaders like Elon Musk and Steve Wozniak calling for a pause in AI development. In a lot of ways, to me at least, it feels a bit like the early days of the pandemic, where there's a real fear of the unknown and people want to clamp down and push back on the technology rather than embrace it. I think a lot of that comes from a lack of knowledge and a lack of trust. So I'm interested to explore today what blockchain might be able to do to help in this critical moment of artificial intelligence and technology development. With that as a bit of a kickoff, Anna, can you give us an overview of what artificial intelligence is?
Anna Gressel
Sure, Shawn, thanks so much. We've been working in the AI space consistently since around 2017 and 2018, and what we work with now, from a systems, policy, or even application-layer perspective, is very different from what we were seeing back then. So I want to talk a little bit about what I might call traditional AI, and then I'll move to generative AI, which is quite a bit different in terms of its capabilities. Traditional AI, when we started doing this work, really was a set of applications focused on using a large amount of data to make predictions or decisions based on patterns in that data. These systems were very good pattern recognizers, and they were used to do things like determine who might be a good bet for a loan, or who should be accelerated from an insurance underwriting perspective because they were very low risk, so companies could figure that out and move them through the insurance pipeline a little more quickly. They were also really good at doing things like detecting fraud: noticing anomalous patterns in data and then flagging anomalous transactions for a second look. Those kinds of applications, I would say, were developed really at the corporate level. They were task-specific and very good at executing a particular task.
Anna Gressel
But what we're seeing with generative AI is a little bit different. For folks listening who are wondering what generative AI is, those are models like ChatGPT, or DALL-E, which is an image generator; there are other kinds of generative AI models too. These really go from just making predictions to, as the name suggests, actually generating text, generating analyses, generating images. So we're not just making decisions or predictions; we're actually creating something, and that is a little bit different. It raises a few different kinds of issues from an AI trust, ethics, and responsibility perspective. The first is that we're not just talking about specific tasks; we're talking about very open-ended models that can be used for a lot of different things. They're multi-purpose. Many of the foundation models at what we call the bottom of the technology stack can be used to do everything from making recommendations to driving tax analyses; they can be used to write short stories, to write poems, to analyze public statements. They're really very multi-purpose. They don't have to be trained for a specific purpose, but they can be prompted by a user. The second is that they're multimodal. Some of these models can do things like text-to-image, image-to-text, text-to-video, image-to-video; they can analyze sounds, depth, 3D modeling. They can do a lot, and that kind of multi-purpose set of capabilities is really important when we think about what the models may accomplish in the future and why they're very different from those specific, task-oriented models. And the final thing I would mention is that they're often in the hands of individual people, so these are not necessarily technologies that live only behind the big corporate wall.
Shawn Helms
Mm-hmm.
Anna Gressel
Many of them have been made available to users through generally open user interfaces or even through open-source systems. That is important, I think, to Shawn's earlier point about democratization. We're really beginning to see the democratization of this AI technology, and many people are creating new applications using it.
Shawn Helms
Yeah, that's great, Anna. Kai, I know you are instrumental in, and working a lot in, the regulatory space for artificial intelligence. Can you give us a bit of an overview of what's happening in that space and what key principles the regulators are looking at?
Kai Zenner
Yeah, gladly, and I think I would draw on a lot of the points that were just mentioned by Anna, because it's quite interesting. The whole "let's regulate AI" movement is a long-running movement.
Kai Zenner
It really started already a decade ago. Already in 2017, for example, the OECD was discussing how to regulate AI, and back then it was not about large language models, so foundation models like GPT-4 and chatbot apps like ChatGPT and Bard, which have only recently been developed. It was more about new machine learning and deep learning technologies, because already those technologies challenged our existing legislative frameworks, for example in the area of liability legislation. Many member states within the European Union face a situation where certain harms, for example caused by a defective drone that falls on the head of my grandmother, would just not be covered, and therefore my grandmother would not get any redress. It was found that certain elements like opacity, complexity, autonomy, and so on are really leading to those legal gaps, and based on those findings people were working on certain principles, like human oversight, technical documentation, the requirement that data sets be unbiased, transparency, and so on, to address those legal gaps a little bit, to update our legislative frameworks, and also to build up the legal certainty that is needed to push forward the deployment of those new AI systems. The AI Act in the European Union is really based on all that preparatory work by the OECD, but also by UNESCO and a lot of other actors, and especially by the harmonized standards organizations like ISO, IEEE, and so on, which had also already done lengthy work on that. And then suddenly, like Anna was saying, there was a new kid on the block, the LLMs and so on, which really challenged all those new legal proposals, because, for example, the AI Act was really focusing on
Shawn Helms
Mm-hmm.
Kai Zenner
a machine learning system that has an intended purpose and also concrete use cases. As Anna said, those new large language models can be used for thousands of different use cases and don't have one real intended purpose. So now we need to start a little bit from scratch again. We have, for example, the AI Act, which covers normal machine learning and deep learning systems, but we have this new type of AI which is becoming more and more popular and big, and again we need to find out what we need to adjust and adapt, because what was developed internationally and in the European Union is not really fitting.
Shawn Helms
Yeah, that's great. And Kai, you're highlighting a point I had in my mind, which is that legislation tends to move slowly and technology tends to move quickly. What I'd like to explore with you all is whether there could be a technological solution to some of the problems that society sees with AI platforms. So with that in mind, what are some areas where blockchain and artificial intelligence are overlapping, and what does that look like?
Erwin Voloder
So if you want, I could take that one. Can you hear me? Great. I mean, at a basic level, you have decentralized infrastructure and blockchain technology, and they can act as
Shawn Helms
Yep, we can hear you.
Erwin Voloder
sort of encryption-backed guardrails for AI systems. In that kind of model, an AI system can be deployed with built-in guardrails to reduce its ability to be misused or utilized for any kind of negative actions or behaviors. Developers of those AI models could then encode specific parameters within which the AI can access, for example, various key systems like private keys, and those conditions can be enforced with the help of tamper-proof technology like blockchain and other kinds of distributed ledgers and smart contracts. Increasingly, this also has great implications for oracles. Where I see the flip side of that, and Anna and Kai have already touched on this, is that these large language models and generative AI, for example DALL-E, Midjourney, etc., can create a whole bunch of really complex images and deepfakes. Recently we saw that somebody created an image of the Pentagon on fire. It wasn't real, and the Dow dropped 30 points. So you can imagine sophisticated trading bots that are primed to issue shorts on certain stocks in response to a market condition that a deepfake has created, and then that compounds ad infinitum. So on the one hand, sure, we can make the argument that you can use distributed ledgers to keep encryption-based guardrails on those AI systems, but on the other hand
Erwin Voloder
we also have the problem of de facto embedding those same risks and amplifying them in the event that somebody, or a group of somebodies, decides that there's a lot of money to be made there, as is often unfortunately the case.
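[To make the "encryption-backed guardrail" idea Erwin describes a little more concrete, here is a minimal, hypothetical sketch in Python. The policy fields, thresholds, and function names are invented for illustration; in practice the conditions would typically be enforced by a smart contract or key-management service, with the policy digest anchored on an append-only ledger so that tampering is detectable.]

```python
import hashlib
import json
from dataclasses import dataclass

# Hypothetical guardrail policy limiting what an AI agent may do with a signing key.
@dataclass(frozen=True)
class GuardrailPolicy:
    max_amount: int            # hard cap per transaction
    allowed_actions: tuple     # whitelist of action types
    require_human_above: int   # amounts above this need human sign-off

    def digest(self) -> str:
        # Hash of the policy; anchoring this digest on a tamper-proof ledger
        # makes later, silent changes to the policy detectable.
        raw = json.dumps({"max_amount": self.max_amount,
                          "allowed_actions": list(self.allowed_actions),
                          "require_human_above": self.require_human_above},
                         sort_keys=True)
        return hashlib.sha256(raw.encode()).hexdigest()

def check_action(policy: GuardrailPolicy, anchored_digest: str,
                 action: str, amount: int, human_approved: bool) -> bool:
    """Return True only if the agent's requested action passes every guardrail."""
    if policy.digest() != anchored_digest:
        return False                      # policy no longer matches the anchored version
    if action not in policy.allowed_actions:
        return False
    if amount > policy.max_amount:
        return False
    if amount > policy.require_human_above and not human_approved:
        return False
    return True

policy = GuardrailPolicy(max_amount=10_000,
                         allowed_actions=("rebalance", "pay"),
                         require_human_above=1_000)
anchored = policy.digest()   # in practice, written to an append-only ledger at deployment

print(check_action(policy, anchored, "pay", 500, human_approved=False))    # True
print(check_action(policy, anchored, "pay", 5_000, human_approved=False))  # False: needs a human
```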
Shawn Helms
Yeah, that's a great point. Anna, for you: when I think about the ethos of blockchain, it's very open. Most companies open-source their platforms, and part of the draw of blockchain is this auditability and transparency. That sheer openness of everything in the blockchain world is a bit contrary to how people have historically viewed AI, which, almost by the nature of the technology, works in a way where data goes in, there's a sort of black-box processing, and predictions come out. Even with large language models and generative AI, there have been lots of questions about how the technology works and what it has been trained on, and not a lot of clear answers. So in some ways I see these two technologies approaching what they do from opposite ends of the spectrum from a transparency perspective. I'm interested in your view on that and on how these technologies might be able to complement each other.
Anna Gressel
Yeah, I think that's such a good question, and if I had to predict, I think it's an area where we're going to see a lot of development from a solutions-based perspective. When you think about what the drivers are for that, it's not just the question of whether we can open the black box because we want to, because we think it's more trustworthy or ethical, but also because regulation will compel us to. Kai may want to chime in on this later, but one of the core concepts within the AI Act is this idea of information transfer between different actors, or clarity of information and transparency between developers and, for example, government authorities. So it asks for all different kinds of mechanisms to make that happen: data governance on the one hand,
Shawn Helms
Oh.
Anna Gressel
risk management related to AI systems on the other, auditability, record keeping, logging. These are all transparency requirements under different words and in different terms. So I think the question, on a number of different levels, is how blockchain potentially helps with that, notwithstanding the complexities around scaling, which I think we'll talk about. Let me put them into a few buckets of at least some promising areas that I see. The first is with respect to inputs to AI systems, which is usually what we would think of as the data used to train or run AI systems. Blockchain's ledger capabilities, I think, offer some really interesting options around record keeping and data provenance, to make sure that the inputs to AI models are actually of very high quality and are sourced from clear, trusted, reputable sources. That may not work for all kinds of AI, but at least for some kinds it may offer a really interesting benefit in terms of making sure the data is trustworthy in and of itself, for applications where that matters. Or, and I think this has been in the news lately, it may offer traceability with respect to things like consent to data use, or intellectual property rights in that underlying data, whether that IP right has been granted; that can potentially be recorded in the blockchain.
Anna Gressel
But it'll take some effort to see how to make that work, and I think we'll see some folks working in that direction. The second is with respect to the functioning of the AI model itself. I mentioned auditability earlier, and I think it's a really interesting question whether blockchain could be the right place to record decisions that have consequential impacts on particular people, or particular decisions made by the model. So could recording a decision in a blockchain, so you could go back and audit it later, work? Possibly, but I don't think the technology is there yet.
Shawn Helms
Oh.
Anna Gressel
But it is one way to think about whether that automated decision could be recorded and looked at later, and whether there's an interesting and useful record to be made. And the final thing is on the output of the model itself. Getting back to one of the points Erwin raised earlier, there are these really tricky issues coming up around deepfakes and the credibility of information. So if there can be some sort of record kept, unlike data provenance, this is actually about the credibility of the output: can we mark something as being created by a deepfake, or conversely can we mark something as created by an AI model, or can we mark something as being created by a human, so we know the provenance of the output and whether it was modified or affected in some way by an AI system? That may end up being important down the road for things like verifying news content. But again, I don't think the technology is quite there. These are just ways of thinking about future applications that may show promise in light of the challenges of AI.
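[As a rough illustration of the provenance ideas Anna describes, recording where training data came from and marking where model outputs came from, the sketch below shows a toy append-only, hash-chained log of the kind a blockchain provides natively. The entry fields and metadata are illustrative assumptions, not any real standard.]

```python
import hashlib
import json
import time

def _sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLog:
    """Toy append-only log: each entry commits to the previous one,
    mimicking the tamper-evidence a blockchain ledger provides."""

    def __init__(self):
        self.entries = []

    def record(self, kind: str, content: bytes, metadata: dict) -> dict:
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        entry = {
            "kind": kind,                      # e.g. "training_data" or "model_output"
            "content_hash": _sha256(content),  # fingerprint only, not the data itself
            "metadata": metadata,              # e.g. source, consent or licence reference
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        entry["entry_hash"] = _sha256(json.dumps(entry, sort_keys=True).encode())
        self.entries.append(entry)
        return entry

    def verify_chain(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "entry_hash"}
            if e["prev_hash"] != prev:
                return False
            if _sha256(json.dumps(body, sort_keys=True).encode()) != e["entry_hash"]:
                return False
            prev = e["entry_hash"]
        return True

log = ProvenanceLog()
log.record("training_data", b"<dataset bytes>", {"source": "licensed-corpus", "consent": True})
log.record("model_output", b"<generated image bytes>", {"generator": "example-model-v1"})
print(log.verify_chain())  # True; altering any recorded field would break verification
```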
Kai Zenner
Because Anna mentioned me, and because I found her list and also Erwin's examples extremely good, I would mention the same. And because Anna brought up the AI Act, there is one thing that I also wanted to underline. AI, and especially foundation models, I would really consider the new general-purpose technology, like the internet, fire, iron, and so on. Because we are really at the beginning there, the market is still tabula rasa and everything is possible. Of course, what can happen again is that after a few months we have complete market concentration, so that foundation models are dominated by a few companies like OpenAI, DeepMind, and so on. But especially if we take blockchain and AI together, they give us a chance to rebuild our economy a little bit, to engage more companies and to help them with information sharing. As Anna said, we in the European Union now put a lot of effort into the so-called
Kai Zenner
Article 28 on responsibility along the AI value chain, where we are saying to companies: okay, it cannot be the case that only the downstream provider of a high-risk AI system needs to be fully compliant with the AI Act. He or she needs to get all the information needed from, for example, a dataset supplier like Google, or from a foundation model developer like OpenAI, and so on. So we really tried to push everyone in a direction where, in the future, there is just more exchange, of course still considering that there are trade secrets and so on that you cannot share, but the rest you should share more in the future. And again, blockchain and AI could then really create decentralized marketplaces which involve many more actors, smaller actors, including SMEs and startups, much more in the market, which
Kai Zenner
was not always the case before in the digital market, especially when we look at platform 1.0, where only a few actors are really dominant and basically all the rest of the economy are just customers that take a product and use it to build another service on top. So this, I think, is what makes me really excited: here we can maybe create a new kind of economy, at least in a small area.
Shawn Helms
That's great. Erwin, I know when you and I were in Barcelona together, we were talking a bit about autonomous agents and other AI interacting with the blockchain.
Shawn Helms
I'm interested in your view on that, and whether you think this is going to be a problem or an opportunity.
Erwin Voloder
Funny enough that you mention that, because recently the MakerDAO ecosystem released their Endgame proposal, Endgame phases 1 through 5, and what this essentially does is create
Erwin Voloder
SubDAOs, which you can think of as nested decentralization. In phase 3 specifically, they talk about the introduction of governance AI tools that will be launched to help with improved governance and monitoring, and these will be aligned with the so-called Alignment Artifacts, which are going to contain all the principles, rules, processes, and knowledge of the MakerDAO ecosystem. These are then going to be optimized with those governance AI tools, creating a so-called ecosystem intelligence, as they're calling it, that will accumulate knowledge and then help improve those processes and decisions over time. And then a purpose fund will be spun off from this for the development of free AI models and tools for so-called socially impactful projects. So you can already see, among major protocols, and Maker is one of the largest by TVL in the DeFi space, that there is a concerted effort to already start using these autonomous agents, but at the same time creating, I'd say, nested dependencies within the ecosystem. Because when you start talking about nested decentralization, and DAOs within DAOs that have their own stablecoins, in this case separate from Maker and DAI but still fungible with Maker and DAI, and then you have these AI tools being used in those separate layers while you still have the abstracted top layer, I think it's still really early to say how that is going to work in practice, because this is a protracted time horizon and I'm using one very specific example. But in general, I think another
Erwin Voloder
challenge of using autonomous agents within decentralized systems is, first of all: is this autonomous agent simply a helper droid, like in Star Wars, or does this thing have an identity and agency? Can it attest to things, can it trigger payments, can it operate a validator node, is it a delegator? These are questions that will require a completely different rethink of how we're looking at liability and how we're looking at agency under EU regulations, basically a reappraisal of the eIDAS framework to include autonomous agents. I think it's going to be extremely important within the Gaia-X ecosystem; you have moveID, which is already implementing the use of autonomous agents in mobility systems that can build on decentralized Web3 platforms using smart contracts, or specifically within decentralized marketplaces like what Kai was already discussing, which is a great example. So I think we're always in a situation where what developers can cook up in a lab is here now, but the regulation is always playing ex-ante catch-up, and I fear that we're running out of time in terms of closing the gap on these things, because smart contracts and blockchain took roughly 15 years to get to where they are now, but the exponential growth of the artificial intelligence ecosystem is much more hyperbolic. We're talking about a hockey stick versus a gentle slope, and I think that this is the fundamental challenge: we are racing against time.
Shawn Helms
Yeah, great, thanks for that, Erwin. So as we think about autonomous agents interacting with the blockchain, even becoming part of the blockchain, one thing that is certainly on regulators' minds and is talked about a lot is having a human in the loop. Anna, I know you have thought a lot about issues around artificial intelligence. Is part of the solution having a human in the loop, and how do you view that and its interaction with blockchain?
Anna Gressel
That's such a good question, and I think it's a really tricky one, because in some respects it gets back to the point that Kai raised earlier, which is that it's hard to make a general rule with respect to so many different technologies. So my perhaps controversial take on having a human in the loop is that sometimes you don't necessarily want one or need one. In certain contexts it's really the speed of the transaction or the speed of the decision that's going to be helpful, and the level of human oversight, and where human oversight is executed or implemented, is going to depend on the risks versus the benefits of having something happen quickly. So one way to think about this is really outcome-oriented: what is the right outcome for the system, and working back from that, does a human decision at point A or point B make sense? Or is it really the case that you want to let the process unfold and have humans be able to go back and undo it later, or stop it if it seems to be going far off the tracks, with some sort of monitoring in place for that? I think all of these are going to be
Anna Gressel
a combination, and we can talk more about this, of a human and a machine system, with the humans and machines working in tandem. We're really beginning to define a future in which we're all going to be doing that in some way, whether it is Microsoft Word containing generative AI add-ins, in some respects what seems like a pretty easy use case where I can just say, okay, I like this text or I don't like this text, and I'm a fluent user of Microsoft Word, to more complex robotic systems, or even more complex, potentially, trading systems. At every point we're going to have a world in which we have humans and machines, and we're going to have to define what that interaction looks like. So I don't know if I have a general answer to that, even with respect to AI agents, because tomorrow I could create my own AI agent, and the kid in the garage next door could create a completely different AI agent for a completely different set of purposes. But I do think regulators on the one hand, but also companies and society more broadly, are going to have to think about what kinds of tools we want to put into people's hands in the first place, and whether we really need to think about leveling up education to make sure people understand the power of these tools and how to use them responsibly, because so much of this is going to come down to individual use and individual deployment when we start having tools that are customizable. So that's my view, but again, I think others might take a different approach, and I'm curious for the thoughts of the other panelists, because I do think it's a tricky question.
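[As a simplified illustration of the risk-based, outcome-oriented oversight Anna sketches, the snippet below routes low-risk actions straight through for speed, queues higher-risk ones for human review, and logs everything so it can be audited or unwound later. The thresholds, categories, and class names are hypothetical.]

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Decision:
    action: str
    risk_score: float          # 0.0 (benign) .. 1.0 (severe), from some upstream model
    status: str = "pending"

@dataclass
class OversightGate:
    auto_threshold: float = 0.3     # below this: execute automatically for speed
    block_threshold: float = 0.8    # above this: block outright
    review_queue: List[Decision] = field(default_factory=list)
    audit_log: List[str] = field(default_factory=list)

    def route(self, d: Decision) -> Decision:
        if d.risk_score >= self.block_threshold:
            d.status = "blocked"
        elif d.risk_score >= self.auto_threshold:
            d.status = "awaiting_human_review"   # human in the loop only where risk warrants it
            self.review_queue.append(d)
        else:
            d.status = "executed_automatically"
        self.audit_log.append(f"{d.action}: {d.status} (risk={d.risk_score:.2f})")
        return d

gate = OversightGate()
gate.route(Decision("rebalance small position", 0.1))
gate.route(Decision("execute large trade", 0.55))
gate.route(Decision("transfer all funds to new wallet", 0.95))
for line in gate.audit_log:
    print(line)
```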
Shawn Helms
Yeah, and you know, Kai, I'm interested in hearing from you in particular. I know the AI Act contains provisions around kill switches and circuit breakers, and I'm interested in where your head is at on this issue.
Kai Zenner
Yeah, and there will not be a dispute between Anna and me, because I completely agree with what she said. And luckily, our political side and also the rest of the parliament
Kai Zenner
agreed, after lengthy debates on it. Because, as Anna was saying at the very beginning, the AI Act originally was drafted as a horizontal legislative framework, with rules that are applicable to every sector and every use case in the same way. And this would mean, yes, there is indeed a kill switch in Article 14, paragraph 2(e), on human oversight, that would apply to everything: to smart contracts, to a connected car, to an AI-driven vacuum cleaner, to whatever. And sometimes a kill switch doesn't make any sense. Anna was already touching on this a little bit. There is, for example,
Shawn Helms
Right.
Kai Zenner
an AI-driven robot that performs eye surgeries. This is always the example that I use, because in this specific case there are studies showing that if you don't interfere with the robot, most of the surgeries go well and the accident rate is really, really low. But if you allow the doctor to just stop the operation or really interfere in an active way, the number of accidents skyrockets. So in this case, one of the examples Anna was giving, there shouldn't be any human in the loop, at least not a human who is able to interfere, because that actually makes it more risky; it increases the risk. And this is why, as parliament, again after lengthy debates, we changed the complete approach of the AI Act, with a huge change in Article 8, which is a kind of umbrella article for all the other high-risk obligations. We are now saying there that basically even a high-risk obligation like human oversight needs to consider the context of the deployment, the technical harmonized standards, for example from CEN and CENELEC, that are specifying
Kai Zenner
those articles, and so on. By doing that, we now have a kind of law with general principles, which is good, because then we have a minimum standard for all those use cases, but then you really need to do an assessment: okay, what is the AI system, how is it used, who is using it, and so on. I think this is really the best way forward, and it defuses a lot of the problems that we would otherwise have.
Shawn Helms
Yeah, it's interesting. I make a somewhat controversial prediction when I talk about artificial intelligence: I think the state of California, in my lifetime, will outlaw human drivers, because it will be seen as irresponsible to allow a human to drive a car. When a Tesla crashes because of Autopilot, it makes the front page of the news. Why is that? Because it almost never happens. But it doesn't make the front page of the news when a drunk driver crashes into a building, because that happens every day. Humans are prone to mistakes, and so the idea of having a human in the loop as the answer is, I think, an interesting one.
Kai Zenner
Yeah, maybe just because it's really interesting what you are saying: I think this is a difference between the United States and, for example, a country like Germany, where you also need to take into account
Shawn Helms
Um.
Kai Zenner
the general mood among the citizens a little bit, or also cultural differences, because in my country most people, even though they know that machines are less prone to errors, wouldn't trust a connected car, at least for the coming decades.
Shawn Helms
Well, I think that's true. I think that's true in the US too, but I think at some point the statistics are going to become overwhelming. But anyway, Erwin, a question for you. We've talked a lot about how
Shawn Helms
blockchain may actually be able to help AI, and what some of those interactions are. Talk to me a bit about scalability, because certainly in the early days of blockchain applications, scalability was a real problem. I remember hearing about the outrageous amount of processing power that CryptoKitties was taking up on the Ethereum blockchain, and it blew people's minds: is this technology ever going to be scalable? And now this panel is talking about having blockchain record every interaction of artificial intelligence in order to make it transparent. It doesn't seem like blockchain would be able to do that, and I'm interested in your view.
Erwin Voloder
So I'm just going to backtrack quickly on the issue of kill switches, and then I'll get to your question. Just as you have kill switches being discussed within the context of the AI Act, which is what Kai was talking about, you have so-called safe and robust termination under Article 30 of the Data Act. And I think it's interesting, because when you're speaking about safe and robust termination in the context of the Data Act, looking at smart contracts narrowly within the context of IoT devices, this is an ecosystem where IoT devices, blockchain, and artificial intelligence should all exist in some sort of melange, because you're going to need sophisticated tools to make sense of that telemetric data, impute it, and then timestamp it in some append-only ledger that can secure data fidelity and provenance. That being said, a lot of the problems with kill switches in blockchain have been developer fat fingers, like when Ondo Finance locked up six hundred thousand USDC and tanked their protocol, or a Solana upgrade function. Again, just as in the case of the doctor, if you let the person intervene, eventually, statistically, a fat finger could lead to something, and in this case that usually means a lot of money gets flushed down the drain. With regard to standards, the Commission is also doing the same thing with respect to the Data Act and smart contracts, so you have CEN and CENELEC developing, or going to propose, a harmonized European norm.
Erwin Voloder
You have a lot of work being done in the PDL specifications with respect to permissioned ledgers. So I think these two things are happening in parallel, and there's going to come an inflection point when the way safe and robust termination is defined in the Data Act and the way kill switches are looked at in the AI Act will have to come to some sort of harmonization for these things to communicate in the future and cross-pollinate. So I just wanted to discuss that briefly. Regarding scalability, it's a big problem right now in general. Just as you have the Triffin dilemma in traditional capital markets, you have the blockchain trilemma of scalability, immutability, and Sybil resistance, where you can only... hello, can you hear me? Can you hear me?
Shawn Helms
Erwin, I think we're losing you a little bit. Yeah, it's hard to hear you; you're breaking up a bit.
Erwin Voloder
Can you hear me now? Can you hear me now?
Shawn Helms
All right, let me do this. Anna, maybe I can ask you to comment quickly on the scalability issue.
Anna Gressel
Sure. I wish I could do it justice in the same way Erwin could, but on scalability: I do think that's likely to be an issue in the future in terms of recording, for example, all of the decisions of an AI on a blockchain. That's incredibly difficult to do, and I think even outside of the scalability challenges posed by blockchain, it is incredibly difficult to do with respect to AI more broadly in any system. Finding the right way to record all of the different decisions and inputs to an AI system is a big technical challenge, and figuring out how to preserve that information is a big technical challenge. I'm not sure it's going to be solved right now, but there are some upcoming proposals in the EU that might make it more important for companies to consider. Kai may want to weigh in on this, but that's a place where the EU AI Liability Directive is potentially going to have an impact, because the concept right now, at least in its early stages, is that if you didn't preserve and make that information available, for example if someone was hurt, then there could be a rebuttable presumption that you had caused the harm.
Anna Gressel
That's going to shake out through a huge amount of additional legislative work, but it is to say that companies may begin to look at that more seriously. On scalability, the other thing to keep in mind is that AI scalability right now is going in the direction of scaling to larger and larger language models. That may also change as the computing power required becomes a scarce resource and as the value of being able to run large language models on mobile devices, for example, becomes a business interest. So we are actually seeing experimentation with smaller language models and smaller data sets. The scalability pendulum may swing in the other direction, but it is certainly an issue that many people are watching, both from a regulatory perspective and from a competitive-landscape perspective.
Shawn Helms
Um, yeah, please. Yes.
Erwin Voloder
Could I jump in now? Can you hear me? Great. No, I agree with everything Anna said, and honestly what I was trying to say before is that you have the problem of scalability, Sybil resistance, and immutability, the famous blockchain trilemma. But there are three ways, right off the bat, where you could say AI might make a difference. The first is efficient resource allocation: AI could potentially predict transaction patterns and then adjust resources accordingly to optimize network throughput. Another is data pruning and compression: different AI techniques could be used to minimize the amount of data stored on a blockchain, without losing any critical information, to improve scalability. And another is improving consensus mechanisms: having AI algorithms design more efficient consensus mechanisms that reduce the need for computationally intensive processes like proof of work. We already see this, even without AI, in heterogeneous sharding, for example, or in the way zero-knowledge proofs are being used to take a lot of that heavy computation off-chain while also being privacy-preserving. So I think it will be really interesting to see how you can apply those AI tools, for example to designing more efficient consensus, data pruning, and compression, and also combining that with zero-knowledge technologies.
Erwin Voloder
I think that's going to be a real game changer with regard to how that space pushes forward and how you could potentially overcome those issues with respect to large or small language models going on-chain.
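[One deliberately simplified way to picture the data pruning and compression Erwin mentions is to keep bulky AI records off-chain and commit only a compact fingerprint, such as a Merkle root, on-chain, so the data can later be verified without storing it all on the ledger. The toy sketch below is illustrative only, not a production scheme.]

```python
import hashlib
from typing import List

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: List[bytes]) -> bytes:
    """Collapse leaf hashes pairwise until a single root remains."""
    level = [h(leaf) for leaf in leaves]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])            # duplicate last node on odd-sized levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Bulky AI telemetry / decision records stay off-chain...
records = [b"decision-1: loan approved",
           b"decision-2: transaction flagged",
           b"decision-3: image generated"]

# ...and only this 32-byte commitment would go on-chain.
root = merkle_root(records)
print(root.hex())

# Anyone holding the off-chain records can recompute the root and detect tampering.
assert merkle_root(records) == root
tampered = records[:1] + [b"decision-2: transaction cleared"] + records[2:]
assert merkle_root(tampered) != root
```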
Shawn Helms
Yeah, great. All right, well, we're about out of time here, team, but as a final question I'd ask you to look into your crystal ball a bit and predict whether you think blockchain can help solve the trust problem that we seem to have with artificial intelligence. Anna, do you want to get us started?
Anna Gressel
Sure, I'll jump in and say I think there are ways in which it will help and ways in which it probably won't solve everything, certainly in part because AI systems are designed to be human-run, even if they have autonomous elements. I think we're going to see a lot of collaboration between humans and machines going forward, and some of that is about governing the humans, not just the machine part of it. So we need to remember that humans are part of both the promise and the challenge of AI. And on the technical piece, I do think blockchain will offer some very interesting options in terms of at least mitigating a few of the risks we've identified in AI so far.
Shawn Helms
Great, great. Kai, what are your thoughts?
Kai Zenner
Yeah, I just want to focus my answer on one point. What makes me really excited about blockchain and AI is the huge potential of the open-source community. If we really let them engage with both technologies, and also other technologies, I think it will help us make what is happening much more transparent, and maybe address a lot of the concerns and problems that exist today. So I see a huge potential from that civil-society perspective. But also, when I look at European industry, for example, there are huge opportunities if all those actors work together in a much better way in the future, and for companies to draw on results or models that have been developed by the open-source community. So yeah, this makes me very enthusiastic, let's say.
Erwin Voloder
Okay.
Shawn Helms
That's great. Erwin, what are your thoughts about blockchain helping artificial intelligence in this space?
Erwin Voloder
I think there's definitely a bright future if we can calibrate, at an early stage, the way these two technical substrates will interact, and I think we need to move towards what I call centaur regulation. In the same way that, in the early days of humans playing against machines at chess, you had this brief period where people and machines working in concert, like centaurs, were actually winning against the machines, I think that as AI becomes more and more sophisticated and supplants a certain percentage of the marginal productivity of human labor, blockchain will by necessity be one way to secure data fidelity through that transition. And I think we need to start recognizing that the workers of the future are going to be partially centaurs and also fully machines. So when we're transitioning our regulatory framework, we need to start considering both the centaurs and the machines as part of the consumer economy. Otherwise, all we're really doing is making rules for a snapshot in time, be they for blockchain or for AI, and we're missing the forest for the trees, so to speak.
Shawn Helms
That's great. Well, thank you all for the interesting discussion, Anna, Kai, and Erwin. It's always a pleasure. Thank you for your insights, and hopefully there's more to come on this. I hope we can keep the dialogue going, but thank you all again for joining the panel.
Birds of a feather can make progress together
At Owl Explains, we collaborate with trade associations, think tanks, policymakers, and industry partners to further understanding of blockchain, crypto, and Web3.