Aída Ponce del Castillo, researcher: “CEOs like Sam Altman of OpenAI can negotiate with a state as if they themselves were a state. What are the consequences for democratic governance?”

“The current industrial revolution is invisible, immaterial and is taking place at a much faster rate. The risks of certain software are not readily apparent. Trade unions and workers have to keep a critical eye on all emerging technologies. We need to know how to ask concrete and useful questions,” says Aída Ponce del Castillo, pictured here.

(Marta Checa/Equal Times)

The disruptive potential of new technologies has been a constant throughout history. The main difference between the present moment and previous turning points is the speed at which change is now taking place, particularly with artificial intelligence (AI), and our inability to understand it without dedicated study. This leads to a major imbalance between the developers of this new technology and the rest of the world. European Union lawmakers have begun the process of regulating AI. How do they plan to protect citizens, and more specifically workers, jobs and the environment?

We spoke with Aída Ponce del Castillo, a lawyer specialising in science and technology, and a PhD in Law and researcher at the European Trade Union Institute’s (ETUI) Foresight Unit, to find out how well equipped the current legislative proposal is to respond to the challenges of AI, let alone to those of the new technologies that will follow.


In case any doubts remain, if we are given a choice between ethics guidelines, self-regulation and developing targeted legislation, what should we insist upon when dealing with digitalisation, and specifically AI?

Laws and regulations result from a democratic legislative process in which rights and concrete obligations exist for the people to whom such laws apply, in this case EU citizens. Ethical guidelines are not laws but voluntary documents based on agreed ‘values’. Different groups within society may agree or disagree and may interpret these guidelines in a variety of ways. Self-regulation is also voluntary and unilaterally drawn up.

Neither self-regulation nor ethical guidelines work when it comes to labour. If a company collects a worker’s data, for example their facial features, physical characteristics or biometric data, and uses it to spy on them or to evaluate those features for any purpose outside the employment contract, ethical guidelines will not enable those workers to raise their voices or exercise their rights. Only laws can do this. Ethical guidelines and self-regulation are non-enforceable.

Since the beginning of June, the European Commission [the regulator] has been promoting codes of conduct for AI based on commitments made by the actors for whom the code is intended. In the absence of finalised European legislation, the industries in question are proposing a series of commitments, a gentleman’s agreement of sorts, based on industrial rather than ethical policy. But we don’t know what this obliges or commits them to.

Between the General Data Protection Regulation (GDPR), already in force, and the European law on artificial intelligence [which has just taken a big leap forward with the adoption of the European Parliament’s final position, allowing the start of discussions with member states and the Commission, the trilogues], which could enter into force in 2025, at the latest in 2026, how are people being protected in the current era of digitalisation? Does any of this legislation have a social component?

No. The regulation is intended to promote and expand the single market. That is its scope, while taking into account respect for the fundamental rights of people living in Europe. The European Commission’s idea is to provide a legal framework for artificial intelligence systems to be bought, implemented and developed, ideally in Europe, and to turn Europe into a producer of artificial intelligence rather than simply a consumer of what other international players, like the United States, produce.

For this digital market to be successful, European companies must be able to flourish and develop, and foreign companies must be able to invest with economic security here in the European digital market. Thierry Breton [European Commissioner for the Internal Market] is in charge of promoting this digital single market. In my opinion, the AI Act has helped to transform it into a true single market for data. Data, in other words, is driving the economy.

Does the new legislation protect, or envisage protecting, workers in their workplaces? (Article 22 of the GDPR establishes a right to an explanation of decisions based on automated data processing.)

The European law on artificial intelligence is not aimed at the world of work. It is not a law that applies directly to employers or employees within their employment relationship. Rather, it is a law that applies to providers, deployers and importers of artificial intelligence systems. Workers are not even mentioned in the law.

The role of the employer needs to be clarified, specifically whether they are a provider of artificial intelligence, an importer or a deployer. Depending on the category they fall into, employers have different obligations under this law, specifically regarding how they implement artificial intelligence systems, but not regarding how they implement them with respect to their workers or consumers. This is precisely where the sticking point lies. There is no provision that requires employers to take into account employment relationships or their workers when implementing artificial intelligence systems. The law does not say: “these are your obligations when you use the system with consumers, with patients, etc.”. While the Parliament has proposed amendments requiring that workers be informed and consulted before an AI system is implemented within a company, these amendments do not specify the process.

The EU law on artificial intelligence does stipulate that systems must be implemented in compliance with obligations arising from existing laws, in addition to obligations under sectoral regulations. Examples include the Health and Safety Directive, the Machinery Regulation, etc., all regulations that have to do with labour law. [The Parliament has moved forward on a ban on the use] of artificial intelligence systems that calculate or predict human emotions and behaviour, including in the workplace. [If accepted in trilogues, it would be a] way to build in some protection for citizens in general and workers in particular.

On the other hand, high-risk systems are subject to legal obligations that are operationalised by standards. These standards provide concrete details for the implementation of artificial intelligence systems so that implementers comply with their obligations and know more about what they are implementing, and so that consumers and workers are protected. The main caveat here is that these standards are produced in closed committees, open only to members, and compliance is verified through self-conformity assessments. In other words, it’s the implementers who decide whether they have complied with their obligations, and the implementers who certify their own systems.

By way of foresight, Parliament added that the Commission should not be prevented from proposing specific legislation on the rights and freedoms of workers affected by AI systems. This shows the need to establish concrete legal standards for protection at the workplace.

Has any thought been given to protecting work? I’m not talking about stopping optimisation, but about finding the necessary balance between optimisation and social impact. In other words, should this impact, which has a cost (whether in terms of continuing training for workers, unemployment benefits for those who lose their jobs, or environmental damage), be part of the equation, so that those who cause it pay their share?

Lawmakers are interested in boosting and consolidating the internal market on artificial intelligence. Full stop. There is no intention of linking the digital agenda with the Green Deal, no intention of scrutinising how artificial intelligence is produced, how systems like ChatGPT are trained, how much water servers consume, what the CO2 emissions are, etc. There is no intention on the part of legislators to understand, regulate or limit the supply chain that underpins the entire production and distribution of artificial intelligence.

The European regulation on artificial intelligence is designed only to regulate the high risks of artificial intelligence systems. The list is currently being negotiated with the European Parliament. Currently under discussion are biometric systems and whether they can be present in public spaces, social scoring systems, etc.

Should the rest of the world be worried when big tech CEOs [like Sam Altman, co-founder of OpenAI, the company behind ChatGPT] appear before the US Congress and offer US lawmakers a roadmap to follow?

Absolutely. [Chief executives like Altman] are creating a new layer of AI governance. They are working with the European Commission to establish an ‘AI Pact’, a voluntary pact between major EU and non-EU actors. And it doesn’t stop at lobbying. Industrialists and producers are making ad hoc agreements with legislators. They have the capacity and the power to set limits on regulations and chart very precise courses. They can negotiate with a state as if they themselves were a state. What are the consequences for democratic governance?

Two examples: as we’ve learned from Time magazine, Altman has insisted to European legislators that foundation models [such as the one behind ChatGPT] should not be banned or regulated as ‘high risk’. And these efforts have been successful. He has also been in discussions with [European competition commissioner] Margrethe Vestager to agree on a code of conduct for the generative AI industry in the European market.

How can you make democratic laws when such powerful actors are involved?

It seems that the Big Tech companies are no longer content with taking advantage of trade agreements to limit states’ ability to regulate their activities in the public interest when it comes to data management and algorithmic transparency.

Indeed, but they are also going one step further. They are essentially making agreements where they set the rules of the game.

How can we protect ourselves and defend our rights when digital and AI literacy is so limited or non-existent?

When it comes to the labour movement, we don’t need trade unionists to become computer engineers. Of course, there is always added value in unions becoming more computer and AI literate. But let’s not forget that trade unions have faced technological transformation from their very inception. Their very origins lie in the industrial revolution.

Today’s industrial revolution is invisible, immaterial and occurring at a much faster rate. The risks that software can cause are hidden from the human eye. It’s like working with tiny particles. So you have to put a system in place to identify what the potential risks are, to what extent and in what way they can impact human beings.

From my point of view, both trade unions and workers have to keep a critical eye on all emerging technologies (because today it may be artificial intelligence but tomorrow it may be neurotechnology, which seems to me to be much, much more dangerous). We have to know how to ask concrete and useful questions: what does this technology do, how does it specifically and directly affect human beings, does it use workers’ personal data, how does it use it, and how can I find out more about the way it’s used? These are the questions we must be asking.

Simply asking these questions is already a big step because it empowers workers and unions. Not knowing about a subject can’t be an excuse to stay silent. But everything becomes more complicated when employers play the copyright or trade secret card.

This brings us to another crucial point: are human rights and labour rights a burden for innovation and technological development?

Let me answer with another question: how have pharmaceutical products been developed? Pharmaceutical legislation obeys and respects human rights. A law on artificial intelligence, which is basically a law on software, should do the same. What’s the difference? Why do pharmaceuticals obey a specific legal framework that also respects human rights, and not other chemical, biological or artificial products? It’s exactly the same. Human rights are a framework from which other rights must arise. They provide a frame of reference for legislation in a democratic society to function in a respectful way.

Is it possible to legislate digitalisation and AI, protect human rights and at the same time be a leader in artificial intelligence, or at least not be left behind in the race? Or will this require some level of international consensus?

I think it will. There has to be harmonisation of international guidelines. Returning to the subject of pharmaceuticals: when you buy aspirin in Europe, the ingredients, the health risks and the prohibitions are on the box. The instructions are the same, whether you’re here or in China. If we managed to do that with millions of pharmaceuticals, why haven’t we done something similar with artificial intelligence or with algorithms?

When the European regulation comes into force, we are going to see the effect of the lack of transparency, of the lack of real rather than notional regulation. This European regulation is a start but it only provides notional protection. Today, artificial intelligence is in everyone’s hands, whether or not [users] know the consequences, whether or not they know how to use it, whether or not they know if it’s prohibited. Everyone uses it. The next technology that comes along will be the same: universal, instantaneous – as well as disruptive, like all technology. We started using ChatGPT from one day to the next. I don’t think the law on artificial intelligence is equipped to regulate something new that we don’t know about today. That’s what I mean when I say that this protection is notional.

This article has been translated from Spanish by Brandon Johnson