This time, my guest was the brilliant Jeannette Gorzala, attorney at law and Vice President of the European AI Forum. We discussed the challenges of defining AI, the reasons for regulation, and the EU AI Act’s role in harmonising governance. We also touched on global perspectives on AI, transparency and trust, risk differentiation, and concerns about overregulation favouring big players. And something the whole tech community seemed to live for over a couple of weeks – the OpenAI saga, with its safety implications and legal challenges, including liability and IP rights. We also explored the transformative impact of AI on society and the delicate balance business leaders must strike between innovation and compliance.
Transcript:
If you look at Europe now, and we compare this
very honestly to the US, you see that a lot
or most of the innovation comes from the US.
And why is that?
I mean, the AI Act is not in force now, so it
cannot be the primary cause for, kind of like, the lack
of European innovation here, because it’s not even here.
What is actually the problem? Funding.
So the US and also the UK have a
lot of private funding, more than in Europe.
So that’s actually a gap we need to close.
And talent retention.
There was a study by McKinsey where you
see 27% of European highly educated AI talent
moves to the US and another 11% stay
in the US after their undergrad exchange semesters.
So we actually need to work on funding and
talent retention, because the AI Act cannot be blamed,
because it’s not even in force yet.
Hi, this is your host, Kamila Hankiewicz, and
together with my guests, we discuss how tech
changes the way we live and work.
Are you human?
Jeannette, it’s a pleasure.
I know last week was a very exciting,
very busy week for you.
Thank you very much for the invitation.
I’m excited to be on the podcast and share all
the happenings of last week and also the road ahead.
What awaits us?
2024 plus.
So how did it happen that you got involved
and focused on AI regulation and emerging tech regulation?
I know that you’ve been working in the past
a little bit on blockchain and other emerging techs,
but right now you are solely focusing on AI.
Yes, actually, that’s correct.
So I’m actually a little bit of a funny
creature because I’m the interface between law, technology and
business, because in my background I’m a lawyer, but
I also have an international business degree.
And as kind of like history goes
along, I’ve been working very much with
tech companies, acquiring, selling them, product development.
And it was very interesting to me always
what we can achieve actually with technology for
society and on the business side and putting
all those building blocks together.
One side of my life is being a lawyer, focused on
AI law, so advising companies and businesses, and the other part
of my life is working for the European AI Forum.
So really representing more than 2000 entrepreneurs in Europe
with regard to digital legislation and governance, in particular
the AI Act.
And this AI Forum, is it your organization or are you
just part of it? And are you responsible for any country
in particular, or what role do you play there?
Thank you very much for this question.
So, the AI Forum is an international non-profit
organization based in Brussels, and we incorporated it
only this year, although we’ve been working together
for more than three years.
So the AI Forum has been part of the
discussions on AI in Europe from day one.
And I’m very, very proud to say that this
year, on 15 May, actually, we took the
leap and incorporated our organization, and I was instrumental
in drafting the charter, setting up the operations here,
which makes me extremely proud to be a part
of this organization’s founding process.
On the one side, you asked how the AI Forum works.
So it’s basically the umbrella organization
of all national European AI associations.
So we span the whole of Europe: the
Netherlands, then of course Germany, Austria,
Poland, Bulgaria, Lithuania, Croatia, Slovenia.
And whom did I miss?
I don’t know, but kind of like, in total
we are nine associations, and we’re looking to expand to cover
all 27 member states of the European Union in the
future, to really provide the entire picture.
And do you in any way work with the UK?
I know that right now it’s a bit tricky.
We had the AI Safety Summit, was it a little over a month ago?
Is there any collaboration there as well,
or are you working separately, in silos?
So for us, of course, the UK is an important
partner. Even though we know it’s not part of the
European Union, we still consider the UK part of
Europe, and as such an important partner, in particular
in view of AI. We monitor the
landscape there, of course, closely.
Also the UK Safety Summit, which I think
was on 1 November, and all the other
developments going on on the regulatory and governance
side, also the declarations last week, I think,
with the cybersecurity and safety standards and everything.
So we are very much in touch with your
ecosystem and also always look for exchange ways to
work together, to partner and to collaborate.
Because of course we represent the European
AI industry, but technology is global, right?
And in our view, everything depends
on strong global partnerships and working
together to achieve best results.
So that is, of course, we are very
much in touch with you also in the UK.
Because actually nothing in particular was decided there
yet, right? It was more discussions, just to
understand the views of different stakeholders.
But do you feel there is any direction, both
the whole of Europe, right, the European Union, and the UK
should be focusing on or going towards?
And the things which have been decided in the AI Act,
do you feel it’s a good direction?
I feel it’s a good direction.
And also, making reference again to
the UK Safety Summit with the Bletchley Declaration,
I think it was, where all the participants,
including of course the European Union, acknowledged that
there are risks to be mitigated,
but of course also huge possibilities in AI.
And I think the direction for all those huge markets,
so US, UK, Europe, is the same.
So to mitigate risks where mitigation is
needed, but also to leave enough space
for innovation and new developments.
Yeah, exactly.
That’s actually one thought I noticed
and let me just find it.
So, yeah, I read just before our interview in The
Neuron, it’s a newsletter, that the prevailing view of
Silicon Valley, and actually the whole community, is that the AI
Act is at best overly broad and vague, and at worst
an abuse of authority that will kill AI innovation in Europe
and shift it entirely to the US.
What do you think about that?
Well, it’s an interesting point of view, of course,
though I consider it a little bit differently.
Why do I do so? Looking at Europe, we
currently have a lot of different legislation, right?
Because there is no harmonized AI framework.
So a lot of uncertainty, a lot of legal barriers.
And the AI Act will actually remove all those legal
barriers, because you will have a level playing field.
Startups and scale-ups can scale instantly in the
entire European market, which is a very attractive
one and a very underserved one.
So I think the AI Act
will actually help in this regard.
And I think one point that is not often
mentioned, but actually needs to be driven a little
bit further up the agenda, is the innovation measures
included in the AI Act, right? Because they’re going
to install AI sandboxes to promote innovation.
Real-world testing shall be permitted, in particular
in those high-risk areas, with safeguards.
But still, you’re opening up spaces, and
also you’re protecting startups and SMEs and
smaller players from imbalanced contractual relationships, with
unfair contractual clauses and unfair contract practices.
So I think those are actually good points of
the AI Act that are often forgotten, because
they’re in the back, right? They get lost
in this discussion on
biometrics and prohibitions and compliance.
So I need to mention this here, and I
think that will actually enable the European AI ecosystem.
And also one point I would like to make, because I
hear this very often, if you look at Europe now, and
we compare this very honestly to the US, you see that
a lot or most of the innovation comes from the US.
And why is that?
I mean, the AI Act is not in force now, so it
cannot be the primary cause for, kind of like, the lack
of European innovation here, because it’s not even here.
What is actually the problem?
Funding.
So the US and also the UK have a
lot of private funding, more than in Europe.
So that’s actually a gap we
need to close, and talent retention.
There was a study by McKinsey where you
see 27% of European highly educated AI talent
moves to the US, and another 11% stay
in the US after their undergrad exchange semesters.
So we actually need to work on funding
and talent retention, because the AI Act cannot be
blamed, because it’s not even in force yet.
Yeah, you’re right.
And I think I’ve seen this chart.
There was also some movement of talent from the US,
but it was also to Asia, some of it.
So where do you think China, predominantly China,
fits in? They want to lead the AI revolution.
So where do you see it?
You know, with all the parties, US, UK, Europe, how
should we make sure that we work towards the
same goal of the betterment of humanity, and not
waste time competing against each other?
I think that’s a very important point.
And also, of course, taking a global view, like,
of course, including Asia with China, Japan, for example,
if we look at the hardware industry, Vietnam is
actually a key player in chip manufacturing, right?
But also including all the other
continents, Africa, big point, right?
If we look at kind of like, who wants to lead?
I think everyone wants to lead.
Because you will find in the Bletchley
Declaration that the UK wants to lead,
kind of like, global collaboration.
You will find in the executive order of President
Biden that America wants to take the lead.
Of course Europe wants to lead.
I think we all want to lead in all this.
Kind of like taking responsibility with leadership.
We must not forget that it’s
something we need to do jointly.
So collaboration before one nation or one market,
taking the lead, I think will be key.
And here I think standards will play an important
role because every market has its governance framework.
It may look different, I think the direction is
the same, but we need some interoperability, right?
We need to work with each other.
Software is not something that is developed in
one box, in one company, in one basement.
You have a component from China, maybe you have
a component from the US, you have some APIs
in Europe, and everything is put together under
one interface, or in various constellations.
So that’s why I think we need to
figure out how to work with each other,
with compatible standards, to make this work.
And where are we now with this?
I think we are starting, and I think we
need a little bit more speed on this. Because,
at least speaking from the European Union side, I
know that standards processes have been kicked off and
are working in parallel to the AI Act.
And there shall be, or at least it’s planned,
some first announcements on standards for AI at the
end of this year, maybe the beginning of next year.
So there are works being done.
I also saw, of course, President Biden asking
the standardization bodies in the US to kind
of like measure gaps, see where standards are
needed to come up with new proposals.
And of course, also in the UK,
which is, I think, very good.
But I think we need to put this
all together into one picture, because I think
the technology market is a global one, very
integrated globally, with applications spanning the entire world.
And this will actually, I think, be the deciding
factor connecting the frameworks we have. Because the governance framework,
if we refer to the quote you read in the
beginning, is of course a nice framework.
It sets some boundaries, it
sets the direction, the course.
But how will we make the steps in this framework?
That’s for me, the standards. Yeah.
Putting things in action instead
of just writing papers.
And I know that one of the aims
of the AI Act is to protect democracy
and processes like elections from what’s been
happening more and more now: deepfakes, AI-generated
deepfakes, and other sources of misinformation.
And this is quite crucial, especially because
next year we have the EU elections.
So has anything been decided around
this in the AI Act or other initiatives?
Yes, there is, actually, and it was originally
in the proposal of the European Parliament.
And you, of course, know that the final text of the AI
Act is not here yet, but what we see from press statements,
and this is the information status we have, is
that using AI for influencing elections,
so the decision-making process in elections, shall be
subject to the high-risk category. So it’s not
prohibited, but high-risk, so subject to strict compliance.
That’s the one piece of the puzzle.
You mentioned the problem of deep fakes.
So what we think will be in the
final version is that artificially generated content needs
to be marked in a machine-readable form,
to distinguish generated content from originally
created content, if you would like to put
it like that, and the marking of manipulated content.
So if you kind of like change the context,
for example, of two politicians shaking hands that didn’t
shake hands and so on and so forth, that
needs to be marked for transparency reasons, to indicate
that this picture has been manipulated.
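For illustration, here is a minimal sketch of what such machine-readable marking could look like in practice, assuming a provider simply tags image metadata with Python and Pillow; the AI Act does not prescribe this exact mechanism, the key names here are invented for the example, and real-world provenance schemes such as C2PA manifests are far richer:

```python
# Minimal sketch: attach and read a machine-readable "AI-generated" marker
# via PNG text metadata, using Pillow. Key names are illustrative only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_generated(in_path: str, out_path: str, generator: str) -> None:
    image = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")  # hypothetical key
    meta.add_text("generator", generator)  # e.g. the model name
    image.save(out_path, pnginfo=meta)     # out_path must be a .png file

def is_marked_generated(path: str) -> bool:
    # For PNG files, Pillow exposes text chunks on the .text attribute.
    return Image.open(path).text.get("ai-generated") == "true"
```

Metadata like this is easy to strip, which is why the broader discussion around robust marking also covers watermarking and signed provenance manifests.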
I think those are the main points where you
could say there are touch points to democracies and
political elections and public perception and public influence there.
In the AI Act, there are some carve-outs
for media, so for satirical use, where, kind of
like, you have freedom of expression and creative
expression and so on and so forth.
So it’s not, kind of like, mark absolutely everything.
So it’s nuanced.
But I still think it’s good
that we have those transparency requirements.
It’s good that those AI election
influencing systems are under high risk.
But I still think there is a whole lot
that we will be dealing with in the future,
considering it’s not only the European Union,
right? It’s also the US elections next year.
It’s actually a super election year.
Yeah, exactly.
And knowing how fast it spreads, and how difficult
it is to correct news, it’s quite tricky. Yes.
And I think it’s actually something that
cannot be done by regulation because you
will always find some troll farms.
Disinformation as a service is
actually a business model.
I didn’t know that until I read it in,
I think, the Freedom on the Net 2023 report.
So you have a whole industry of disinformation that
can spread fake news easily, cheaply and infinitely.
So I think media competency and AI competency
will be key for the future to train
the students, our children, to critically look at
news, to critically question, can this be real?
Does this make sense?
How do we work with those contents?
Because I think we can regulate that, but
we will not prevent it by regulation.
This is something we need to
address by competence and education.
I think that is the only way to
actually combat those risks in the future.
Because we can ban deep fakes. Right.
But a troll farm will not care about that ban.
Of course. Yeah.
I guess it’s always the
question of following the money.
So it’s very tricky for people to
understand if they only ever see misinformation, right?
And then you are trying to change their view.
I saw your post the other day.
You wrote about the aspects of
biometrics and law enforcement.
Was something decided around this
in the AI Act as well?
It was actually very surprising, because we thought
that the foundation model aspect of governance would
be the one where most time is spent
in the negotiations.
But surprisingly, there a compromise was found quite
quickly, even if you consider the lengthy process.
Instead, the, let’s say, biometrics and law enforcement
AI bans were actually the hardest knot to untie
in the trilogues for the AI Act.
Because of course, you have two interest groups, right?
You have, on the one side,
prevention, kind of like of crimes.
You want to use AI for security purposes, because
it’s a very powerful technology, it can speed up the
process a lot, in particular with biometrics and everything.
Then you have the other side with
the privacy of citizens and fundamental rights.
And bringing those two into a compromise was super
hard because, of course, there was the demand for
national exemptions to use AI for law enforcement purposes.
So biometric recognition on public spaces,
this was a very hard point.
And then, of course, there are some other
bans, touching upon some aspects of predictive policing,
touching upon biometric categorization for law enforcement,
and emotion recognition in the workplace.
And when we look at the genesis of the AI
Act, so starting with 2021, the proposal of the European
Commission, you actually had only four areas of prohibition.
So the biometrics for law enforcement in public
spaces, but with very large carve-outs, actually,
for national security reasons, plus social scoring, plus
exploitation, subliminal techniques and other exploitation of people where
you have physical or psychological harm.
That’s where they started out.
And then the lawmaking process progressed, and the
European Parliament actually came up with a very
much expanded list of prohibitions, among them the
ones that we now presume are included in
the final compromise agreement.
So a ban on emotion recognition in the
workplace, a ban on biometric categorization for law
enforcement, of course the social scoring, of
course the exploitation, and then the biometrics
for law enforcement in public spaces.
But there we see there will be carve outs.
We do not know what those will look like yet.
But if we think back to the original
proposal, it might be some exceptional cases, like,
for example, preventing terrorist attacks, looking for, kind
of like, people that have fled or people
who are wanted internationally, maybe finding
missing children or victims of crime.
So that’s what we expect will be the carve
outs, and subject, of course, to authorization by the
national competent bodies in the justice system.
So that’s our expectation.
But we do not know for sure.
Will it be enforced on the national
level, or will each nation be able
to decide how strict those frameworks are?
Or does it have to be unified?
So this is also something where there
might be some national exemptions.
What we know is that where there are bans,
they will be, kind of like, uniform.
So, for example, let’s take the easy
example, the social scoring ban: it applies
like it’s written there, and the same for
the predictive policing parts.
And for the biometrics for law enforcement in public,
what we expect is that there will be an exemption
for the member states to decide if they want to
allow this, and then some criteria in the
AI Act for when it shall be allowed.
So maybe for prevention of terrorist attacks and
so on and so forth, and then a
process specified that needs to be implemented.
So the prior granting of a request by a judge,
for example, would be something that we presume will be required.
So that’s the process and the lines we
expect, because we’ve seen this in the first
draft text and assume that something along those
lines has been agreed in the compromise.
But still, on that particular aspect,
so the carve-outs, we actually have
very little information about what those will look like.
I see, yes, because one of the crucial parts
is to let citizens and people who are
subject to those technologies have transparency.
Right.
And also so they can decide how their
data, how their image, is being handled.
And I wonder, where are enterprises
leaning at this moment
in this whole transparency aspect?
That’s a little bit hard to grasp.
I mean, what we see, for example, in Austria,
is that the Austrian government has declared that all
AI applications in public administration will be transparent in
the future, by informing the user or by marking
that such, kind of like, AI systems are in
place, and they want to do a registry of
AI applications that are running in public administration.
We see the same trend, for example, in
Germany, which is considering the same route.
Then it gets very fragmented, of course, because in particular
in the media industry, as you can assume, that is
a topic and there are some developments, but we see
them in some member states, maybe in some disciplines.
For example, there was a declaration from the
Paris journalists stipulating their approach to the use of
AI, and in particular generative AI.
We see some media conglomerates voluntarily pledging
transparency and to informing users in an adequate way.
So this is something we see
adapting in the market right now.
Industry standards are developing, and this, kind of
like, is a trend that will go on.
When we think about the AI Act, in
there we assume, because it was not actually
a hugely controversial topic, information obligations for people
that are exposed to emotion recognition systems, where
they are not banned, or to biometric categorization.
I mean, we know that in many towns and
cities in the European Union, for example,
you already have, kind of like, those markings,
surveillance area and something like that.
So we think that might increase going forward if
more of those systems are used in public.
But of course, subject to the transparency obligations
that will then be in the AI Act.
It’s interesting what you mentioned about the large
tech, the big tech companies, entering voluntarily into
such agreements and, let’s say, leading the conversation,
the dialogue about this.
But there is also this view, I think it
was also discussed, of course it had to be discussed,
at the AI Safety Summit, that those companies shouldn’t
be regulating themselves, and that you need a referee who
will actually see if they are acting with good
intentions or if it’s just for the sake of ticking the boxes.
Actually, there was even a non-paper
circulating, I don’t know, two or
three weeks before the final trilogues.
It was an initiative supported by Germany,
France and Italy, where they supported a
concept they named mandatory self-regulation.
So actually the industry crafting its own rules to abide
by, this mix of mandatory and self-regulation, which is
actually quite a paradox in itself, right?
Because it’s, kind of like, taking it on the
funny side, like letting the fox design the hen house,
and then not even making it an obligation, right?
So of course there were companies in favor of
this approach, but a large portion of the industry,
so for example the SME alliance and many startups, were
actually very much rejecting that approach, because it
doesn’t create a level playing field if you
let a few players draft the rules
and the rest of the industry needs to abide by them,
if you want to put it very drastically.
And also, you cannot distinguish trustworthy players
from other players because the trustworthy players
invest in good products, good governance.
Untrustworthy players do not invest so much,
but still have no consequences to fear.
So you’re actually not creating a level playing
field, but again, a very bad situation for
companies doing their best, adhering to the standards.
So that’s why this was very much rejected, and I
believe it was also not taken up into the final compromise for
the AI Act, because in there you find mandatory obligations
for general purpose AI and model providers.
So it’s moving away from the concept
of self-regulation without consequences.
Yeah.
Which brings me to something in line
with what we’ve been experiencing and seeing all over
the news over the last few weeks:
the OpenAI saga.
Yeah, of course you knew that
I would ask about it.
So how was it supposed to work?
In theory, it still has the NGO part, which was
supposed to act for the benefit of humanity, as they
say, although that’s very vaguely defined as well. What happened there?
We’re also commenting from the outside, but what we
saw is that there was, over a weekend, I
think, a four-time change of the CEO of OpenAI,
which is a very short time span and does
not really create trust for the markets.
And it was actually very bizarre: you remove
a CEO, I think, on a Friday evening.
Then you put in charge Mira Murati,
so the CTO, actually a woman responsible for, kind
of like, looking after the technical and ethical aspects.
Then removing her, replacing her with someone different, and
then bringing Sam Altman back again, with all the
stories around it. Like, we know the news reported about
the conflict between the commercial approach, of course, and the open
source, humanity approach, then Sam Altman’s investments in various
startups, the fear he might build another OpenAI, employees
pledging to follow him to Microsoft, where he was
for two and a half days, or very shortly, right?
So yes, an example of how probably
not to deal with a public crisis.
It will be interesting to see how things develop, because I
think the story went on with the board, and
the new board at OpenAI, which was shuffled around
and I think is still all male, or at
least it was criticized for that some weeks ago.
So I think they have some homework to
do on their side to sort things out
and also sort things out for public perception.
I think it didn’t really create trust in the market if
you have such a huge player that many companies rely on.
Right.
I think a lot of companies are working with
GPT-3.5 and GPT-4 and ChatGPT and DALL·E,
with such things going on in the management.
So there might be some things to repair. Definitely.
But I’m worried about the commercial aspect. Right.
Like the shifts which happened in the board,
and what their intentions were, and the change
of the cap for the investors. Right.
Like trying to change the structure so that
it’s more commercially available to the market.
It’s a bit of a contrast to
what they were initially planning to do.
So I think, as you mentioned very rightly,
it’s actually a problem that’s historic to OpenAI.
Right.
Because they started out as open
source research for the public good.
Then they had a funding crunch.
Then of course comes in the commercial part,
with the huge funding round and Microsoft.
So then you have commercial interests, of course, rivaling
the original mission. Judging from the outside, and knowing nothing
more specific, I would say that the commercial part is
in charge now and has won, because of course you
will always have a conflict of interest
if you want to commercialize as fast as possible,
with, kind of like, a lot of market share,
versus, kind of like, looking at slowing down developments, for
example, taking a step back, maybe delaying releases,
doing one, two, three, four or five more rounds.
So you always have that conflict of interest
because it’s one of resources, of financial resources.
We know that those models need personnel resources,
of course, because you need the developer
capacity for, kind of like, both missions and
sides, and also environmental resources.
So, judging from the outside, I think
the commercial part is now in the lead.
And of course they might still have
the original mission to some extent.
But if we look at the resources,
they will probably be significantly shifted.
And of course it also makes sense, right, because
they received a huge investment and every investor wants
to recoup his investment at some point in time.
So I believe the pressure might be there.
Also, there were rumors about Sam
Altman looking for additional investors before
he got removed and reinstated again.
So that also speaks to the need for funding, and
it would also make sense looking at their development.
And we do not know what they’re developing next. Right.
Maybe GPT-5, maybe improving GPT-4.
But we know it takes a lot, a lot, a lot of resources.
So that’s my humble guess from the outside.
Yeah.
Which doesn’t paint a good picture, because in the
end we will be left with a monopoly again.
And as we know, a monopoly, or just
a few players in the market, is never
good for representing the interests
of the whole population.
So that actually concerns me.
You know, you mentioned it, monopoly or oligopolies.
Because in Europe we have two bigger companies,
so it’s Mistral from France and it’s Aleph
Alpha in Germany. And a small Austrian startup, very small,
one company called Magic, also going in
that direction, hopefully more in the future.
But we see a very huge lack of
diversity, namely diversity of players, because we only
have two European players, to be honest.
And also a lack of diversity
in the models, in the approaches, right?
Because what we believe, or what we see, is
that the trend is going towards smaller specialized models, because
you need something different for the medical industry than
for the financial industry, than for HR, for example,
or if you go to biometrics or critical infrastructure.
So those are all separate industries with
separate demands, separate lingo, separate way of
processes and thinkings and everything.
So we see that this is something where
the demand will probably increase in the future.
And we hope that developments are going in
this direction to have more diversity, to choose
from more different players for the general applications,
but also from industry specific players.
Yeah.
Again, governments should do their bit
to enforce it in a way, right?
To help and to support, but also
to enforce room for more players.
Yes, I think there is actually room.
I hope a lot of developments are ongoing, at least.
I see a lot is being taken up by the startups
and the scale-ups trying to build solutions here.
If we look, for example, at the recent
funding round of Aleph Alpha, it was 500 million.
And talking to the startups, and looking
at the statistics of this year, in
particular the summer, there was a funding crunch.
So I think we have the talent, but we need
to actually provide them with support, also from the
private side, because 500 million is not a small amount.
Right.
So this is one thing, and I
think a huge opportunity lies in collaborating.
If you put five GPUs here, five GPUs
there, it’s going to help no one. Right.
So you really need a lot of compute
power, and easy access to that compute power,
for the companies developing these models.
And this was actually announced, I think, in the
State of the Union by von der Leyen, that they want
to make these capacities available to startups.
So we hope this is going to happen
fast, very unbureaucratically, and in a dimension
that really helps those companies to develop.
Because that is what they need: compute capacity and
hardware infrastructure to really train the models and
get something going technically, and funding, of course,
for the talent and to scale very quickly.
Let’s talk a little bit more about
the responsibility for when things go wrong.
So, for example, I read this thought experiment
picturing an AI-powered garage door: the
algorithm malfunctions and it hurts the neighbor’s child.
So who would be liable?
Would it be the coder, the developer, the company?
How should we tackle this?
Being a lawyer, you know you will never get a straight
answer out of me to the question of who is liable, right?
So, it depends: who is my client?
No, but in this case, it depends on where
the problem was and what the situation was, right?
So did the robot malfunction because
it was poorly handled by the neighbors?
So did they follow the operating instructions?
Did the software malfunction?
Yeah, it’s assumed that the software malfunctions, yes.
And who controls the software, right?
Is it software that needs continuous updating, or
do the, kind of like, owners of the robot
control the software and are responsible for updating?
Did someone forget an update?
Did someone not provide an update?
What about cybersecurity?
All those questions come into play,
and they bring me to one important
development: the liability regulations for AI.
Right, so this is something that has slipped a little
bit out of the public eye, because we’re
all very much focusing on the AI Act.
We’re happy that the compromise was achieved.
But in parallel, there is work
going on on the liability framework. Why?
Because most national liability regimes in
the European Union have a problem
in dealing with software liability.
So if you look at the product safety regulations, or the
Product Liability Directive, which was implemented some time ago, a
product is defined basically as something that is hardware, so
it does not apply to software at all.
This is something that will change in the future.
So an expansion of product liability to software,
which will then be a completely different
game for all software providers, including AI.
So not only for AI, but it will also
apply to providers of AI systems.
And this will be very closely linked
to the obligations under the AI Act.
So that is why I think it’s not bad that you
have the AI Act, where you can say: okay, I complied
with the AI Act, which is the product safety regulation for
AI, and since I am compliant, it is not my liability.
Currently, it’s very difficult to actually allocate
liability to the parties in the liability
regimes, and to prove it, especially when
you don’t know what went on inside, right?
Like in the development of the
software, what exactly went on inside.
And to prove, kind of like, whose negligence it
was, and that that defect actually caused the malfunctioning.
So the software might have ten defects you don’t know about,
and you’d have to prove that there was this single
defect that was substantial for that malfunctioning.
And if you look at the requirements,
at least in Austrian liability law, and
throughout the European Union there are differences,
but more or less it goes in the same direction:
you need causality, you need to link the
damage to, kind of like, the source of
the damage, and then to the negligence.
And this, kind of like, link or chain of
requirements currently breaks at several points when it comes to
software, and that shall be remedied by the
product liability regime, which does not actually require any negligence.
So it will be very interesting
to see how that turns out.
And it’s very important for companies to keep track of
these developments, despite everything else that is going on.
That is very important.
The point I want to make on liability is that this
is a topic that needs to be on entrepreneurs’ radar for
the future, because the excitement over the AI Act is a
little bit overshadowing the developments in this area.
And we expect to see a revised product liability
directive in the future that also includes software.
So of course it also includes AI, and it will
also change the level playing field and the
game when it comes to AI liability.
So on the one side, preparing, of course, to
defend oneself against liability, but also, of course, if
you are a victim or have suffered damages
in connection with AI, to claim those damages and to
receive those damages before the courts.
Right now there is so much uncertainty that it
disadvantages smaller players, who cannot afford legal advice,
and they end up not innovating.
Exactly.
That is the point.
Because if you look at the consultation process
for the AI liability frameworks, this is the
point that is mentioned most by entrepreneurs.
They are afraid, or very hesitant, to put their products
on the market, or to develop, apply
and adopt them, because they are afraid of liability.
And yes, it’s currently a very nuanced, complex
question that cannot be answered with certainty.
And also, there is no way to cover this, right?
So you would also probably not be able to cover this
very well with insurance and so on and so forth.
So this is actually also a huge obstacle
to AI adoption in the European market.
So here I also hope that clarity and rules
that make sense in regard to AI damages will
help to solve these problems for both parties. Right?
So for entrepreneurs, to have more security to develop
and deploy their products and to prevent liability, and also
for people that really are damaged, so that they then
also have the possibility to assert those damages.
Yeah.
And we need to act fast, right,
because the whole technology is developing exponentially
and we cannot lag behind other players. Okay.
And there is also this part of IP, right?
There have been numerous cases of artists suing,
or trying to sue, the companies who are
developing those technologies, claiming that their art has
been used to feed the algorithms and, basically,
that the whole system is copying their work.
How can artists, and people in general,
have their work legally protected, and not
be afraid that someone else will steal
their intellectual property and benefit from it?
That’s actually a very good and very tricky
question in the times we are in, right?
You mentioned already the lawsuits alleging copyright
infringement, trademark infringement, competition law infringements and
so on and so forth.
I think none of these lawsuits has been finally decided
yet, but we already have some pointers, looking at
the AI Act and at copyright in Europe, which is
very different to copyright in the US, for example.
So the AI Act actually only touches
on copyright, I think, in two places.
The one place is where the general purpose AI
model providers are obliged to publish a summary of
content they used for training of their models.
So one reason is, of course, transparency; another
reason is to support artists and other
parties whose works may have been used
without permission to, of course, assert damages.
And the second point where it touches
upon this issue is with the machine
readable marking of generated output.
So to make sure it’s not
confused with human generated content.
The AI Act does not solve the copyright question.
And we have different copyrights: we have the
Copyright Directive, harmonizing to some level, and then you
have the 27 implementations, so you have nuances in
the European copyrights to be aware of.
So that doesn’t make the question, and the topic
of how to be safe for the future, very easy.
So one thing is, of course, to think about no-crawl AI,
so, kind of like, preventing scraping of your website.
This you need to do in a machine-readable
format, because what happens is that crawlers
go over the entire web and collect information.
And if you have a no-crawl AI machine-readable
plugin or insertion on your website, you can prevent that,
which might be a slight disadvantage in SEO.
So there’s a trade-off to be made here.
This is the one thing.
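As a concrete illustration of that machine-readable insertion, here is a minimal sketch assuming you control your site's robots.txt; the user-agent tokens below (OpenAI's GPTBot, Google-Extended, Common Crawl's CCBot) are published crawler names, and honoring robots.txt is voluntary on the crawler's side:

```python
# Minimal sketch: generate a robots.txt that asks known AI training
# crawlers not to scrape the site. This is a polite, machine-readable
# opt-out signal, not a hard technical block.
AI_CRAWLERS = ["GPTBot", "Google-Extended", "CCBot"]  # published user-agent tokens

rules = "\n\n".join(f"User-agent: {bot}\nDisallow: /" for bot in AI_CRAWLERS)

with open("robots.txt", "w", encoding="utf-8") as f:
    f.write(rules + "\n")
```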
And then on the other side, of course, also using
AI to find deep fakes to distinguish manipulated content.
So kind of like working with
technology to distinguish fact from fiction.
In this new digital rights world, what I think may
be a question for the future is who is the
author or who owns the works generated by these tools?
Which is also something that
is answered very differently globally.
So you have even some countries considering the
tool or the AI to be the author,
which I think is a little bit difficult. Yeah.
Who collects the royalties, right, exactly.
Which needs to be a human, at least in
Europe, because only humans can have rights and obligations.
Even though we blame the algorithm very often,
it always translates back to a human agent.
So this is why I don’t think it makes much
sense to say the author is DALL·E or Midjourney, right?
Because at least now they cannot collect
any royalties or grant you any rights.
So it’s always a natural or legal person standing behind it.
And in Europe we have human authorship, so
it needs to be attributed to a human.
At some point in time, they considered something like
a machine personality, but this was not, kind of
like, taken up further, and it’s not on the
agenda to institute something like that.
Yeah, but some countries allow that.
And actually there are robots which have... it was,
I think, somewhere in Asia, right? Or maybe Arabia, I don’t know.
In Asia, I think there were some considerations,
let’s put it like that. Considerations, I think,
in South Africa. I don’t know, maybe.
I think it was Sophia the robot, right?
I don’t know Sophia, but I will
definitely look her up after our conversation.
I think she’s considered, whatever, a being with rights.
I don’t know if you agree, but the
CEO of Alphabet, Google, says that AI could
be more profound than both fire and electricity.
Do you think people grasp this
idea, or do they still underestimate it?
We’ve entered this quite transformational phase where things are
happening so fast that sometimes I feel like I cannot catch
up with how many developments are being made, how many
new releases of new models are coming out, and one
is much better than the other.
And it’s never been like that, so fast, I mean. Yes.
So I also feel things are getting
faster and faster, as you mentioned, new
releases, new developments every day.
Plentiful.
I think AI is really a big game changer and
I myself feel very happy to be alive at this
point in time where we still have the opportunity to
define the way of the future, define the red lines,
open up the spaces for opportunity.
So this is something I’m very grateful for, and it
is my passion, where I spend a lot of my work,
but I think it all translates back to humans.
So for me, AI is still currently a
tool, and it is a tool that we use.
And I think human agency is important now and
will become even more important in the future because
we use these technologies, we benefit from these technologies,
we are responsible for these technologies.
And this is something we need to keep
in mind because I think one of the
huge risks is technology over reliance.
So blaming the algorithm, just accepting every
decision proposal, this is something I’m actually
a little bit afraid of.
So here I think we need to do a lot
more in the future on education, because of course, I
think the opportunities of the technology are
very much known and very much promoted, but we also
need to look at the limitations, and to solve the
right problems with the right technology.
So I don’t think generative AI
will solve all our problems.
There are other, kind of like, technologies, simple
machine learning techniques and so on and so
forth, that can be even more impactful in
certain industries and actually help us to solve
the challenges that are on our agenda.
So climate change, if we figure out
how to use this technology also sustainably,
so green AI versus red AI topics.
And if we figure this out, I think we have
a very great path ahead, and I think we should
collaborate to achieve this to the greatest possible extent.
And then I actually see a bright
future because I’m an optimist at heart.
So maybe some would say... I don’t
know if you’ve seen the Andreessen techno-optimist
manifesto, and he’s comparing...
Yeah, I’ve seen it.
Okay, so what do you think about it?
I think it’s bad for everyone to trust blindly, because
you might end up in a very bad space.
So trusting blindly, even in technological developments, is for
me at least not the right path to follow.
I think we need to develop and, kind of like,
create trust, because there is a lot of fear in
the market, also from Terminator and suicide robots.
So I think we need to start creating trust
in this technology, awareness, but with open eyes, right?
Not with closed eyes, following
down some cliff and wondering.
So optimism, yes, but with open eyes, with
cautiousness, like you mentioned. We should help to
educate people about the opportunities as well.
But what do you think about those
people who may be left behind,
who may not otherwise adjust to what work will
look like even in the next few years?
How should we make sure that they
still provide value and feel valuable?
I think that’s also one of many challenges, but
also one of the greatest, because what
I think is that AI will be, and
actually already is, disrupting the workplace.
So tools that increase efficiency very
much in many, many aspects.
So what I think is upskilling will
be a huge topic for the workforce.
So educating them on how to use AI, because
the people that use AI will not be outperformed
or left behind, but the people who don’t use
AI will be. And that’s the one thing.
So this is, kind of like, more on the
entrepreneurial side. And thinking from a citizen’s perspective,
I think we need to onboard people as broadly
as possible and not leave anyone out, right?
So starting in the schools with the children, with
AI competency and media competency, to be equipped for
the AI powered future, but not to stop there,
but to go further and also take everyone into
consideration, the people that are in retirement, for example,
and so on and so forth.
Because also for them, AI can be a huge enabler.
If you, kind of like, need assistance in
your late days and so on and so forth.
And also, you might be a victim
of deep fake voice fraud.
For example, your niece calling and asking you to
transfer ten k to a very exotic country. Right?
So you also need to onboard the whole society,
not leaving anyone out based on any criteria.
So not age, not, kind of like,
economic circumstances, not geographic regions.
We need to onboard everyone.
Again, a very broad, very difficult thing to do.
Thank you for playing your part in making sure
that countries and governments are doing their bit.
Just to wrap up, what advice would you have for
business leaders who at this point may be struggling,
like we all are, to navigate this landscape and trying to
strike the balance between innovation and compliance?
It is a difficult situation for business leaders, but I
think the easiest way is just to start.
I think starting is the hardest
point, but the most important point.
So starting to familiarize yourself with this technology,
learning about it, experimenting with the tools,
and then embarking, probably a little more
swiftly if you might be subject to the
AI Act, on the governance journey.
So learning what the roadmap ahead is, and positioning
yourself and your company in that regulatory landscape.
And I think when you do that, things will
unfold actually very naturally, because you might even find
out that you’re doing fine and everything’s great, and
you just need to, kind of like, upskill your
workforce and think about generative AI, you know, so as to not
have any business secrets leak into the market, as
happened with Samsung or the new GPTs.
So maybe you just need to work on that angle.
If you find yourself in the AI Act somewhere, you have
two years, starting from the beginning of next year,
to line everything up, which should be sufficient if you attend
to it swiftly and in a good manner.
And most importantly, educating yourself and then
taking all your employees on the journey.
Because I think this will be the key
factor deciding which companies can actually thrive and
will really, really have a competitive advantage.
Because they use AI, they manage
the risks effectively in their organization,
so generative AI or compliance risks, and they have
differentiation in the market, because they can say
they are compliant under the AI Act and have
a certification that creates trust and demand in the
European markets and markets abroad.
They create branding for themselves,
attract customers, attract employees.
So I think these things are not
to be underestimated as an enabler.
And yeah, it’s better to start now.
And starting is the hardest point.
From there, I think everything will
work itself out, step by step.
I completely agree.
When we speak with lots of business leaders,
they are always afraid of feeling that
they are the guinea pigs. Right?
But it’s not like that.
You have to join the race, because, like you said,
the people who are using AI, versus the people
who are not, will be the ones who
will lead and who will provide value at scale. Yes.
And there’s also, kind of like, nothing to be
ashamed of, because there are many people out there
throwing around LLM, generative AI, foundation model, I don’t
know, whatever kind of, like, marketing words.
I think very few of them know what they’re speaking of.
And just sorting those basics out demystifies a lot,
and then it’s about just starting slowly. There is a lot
of coaching, even executive coaching, that we do, just to
take you, kind of like, along the way,
from a technical perspective, from a governance perspective.
And I think that’s the most important
start because the executive agenda actually determines
the path of the company.
And if you sort that out, it’s easy to break
down what needs to be done, what is safe,
where you have gaps. And it’s a great roadmap
for mapping out everything that needs to be done,
or even where you can, kind of like,
have competitive advantages and save costs, without any additional burdens.
Yeah, very interesting.
And is there any project or initiative
we should be following which you are part
of and are excited to be working on?
So of course I need to recommend
following the European AI Forum, which is
of course involved in the policymaking process.
But now, for us, we are transitioning to implementation,
because we see this is going to be the most
important part for the next two years.
And also keep a lookout for an initiative we will
be launching, actually, to support everyone in the market
with understanding the AI Act, finding their position, educating
and onboarding as many people as we can.
So watch out for the European AI Forum, our announcements.
I also regularly post on LinkedIn and my blog
and will keep you informed about all the developments.
And there is actually a lot we have prepared, coming
in particular from the start of 2024, to ease the burden of
AI governance and getting ready for the AI Act.
Yeah, definitely, I follow you on LinkedIn and I
learn a lot, so I will definitely leave the
links for others to follow, because it’s really valuable.
Jeannette, thank you.
Thank you so much for all
this knowledge and all this wisdom.
And I am definitely more relaxed knowing that people
like you are working towards regulating such a tricky
field, because things are changing so fast, so much,
and you have to consider so many edge cases.
Thank you very much for inviting
me to your podcast, Kamila.
I’m so happy and very proud to be a part of
this great series, and I can only encourage all the other leaders
to come have a conversation with you, because I also learned
a lot and I’m very thankful for our exchange, and I
hope we keep in touch in the future.
Definitely we will. Thank you.