Artificial selection

filling in the gaps – Aida Ponce Del Castillo

Stronger legislation than that envisaged by the European Commission is needed to regulate AI and protect workers.

In joint man-machine problem solving, the worker must make the decision (PaO_STUDIO/shutterstock.com)

Artificial intelligence (AI) is of strategic importance for the European Union: the European Commission frequently asserts that “artificial intelligence with a purpose can make Europe a world leader”. Recently, the commissioner for the digital age, Margrethe Vestager, once again insisted on the “enormous potential” of AI but admitted that there was “a certain reluctance”, a hesitation, from the public: “Can we trust the authorities who put it in place?” Technology had to be trusted, she said, “because it’s the only way to open up markets for the use of AI”.

Trust is indeed at the heart of the acceptance of AI by European citizens. The recent toeslagenaffair (childcare-benefits scandal) in the Netherlands is a reminder of the perils. Tens of thousands of families were flagged as potentially fraudulent applicants for childcare benefits, without any proof, and forced to repay, driving many into poverty and some into depression and suicide. This was all the consequence of a self-learning algorithm and AI system, designed without checks and balances and not subject to human scrutiny.

In its current form, the AI Regulation proposed by the commission last April will not protect citizens from similar dangers. Nor will it protect workers. In its rush to advance AI and position itself in the global AI race, the commission has overlooked workers’ rights. The envisaged legislation on AI is designed in terms of product safety and, as such, employment falls outside its legal scope.

The only reference to employment is in Annex III, which lists “high risk” AI systems. These include recruitment and selection, evaluation of candidates, promotion or termination of work-related contractual relationships and assignment of tasks, as well as monitoring and evaluation of the performance and behavior of people in these relationships.


The regulation would not, however, provide any additional specific protection for workers or guarantee the safeguarding of their existing rights, despite the uncertainty that AI will generate in this regard. Meanwhile, the General Data Protection Regulation (GDPR), although in force for nearly four years, has a capacity to protect workers which is not yet fully utilized.

Gaps to fill

Along with other emerging technologies, such as quantum computing, robotics or blockchain, AI will disrupt life as we know it. The EU can become an AI world leader only if it remains true to its democratic and social values, which means protecting the rights of its workers.

To do this, the shortcomings of the AI Regulation and the GDPR must be addressed. Seven aspects deserve more attention.

Implementing the GDPR in the context of employment: Full implementation of GDPR rights for workers is one of the most effective ways to ensure they have control over their data. AI relies on data, including the personal data of workers. Workers must actively use the GDPR, asking how this data is used (potentially for profiling or to discriminate against them), stored or shared, in and out of the employment relationship; employers must respect their right to do so. The commission and the European Data Protection Supervisor should issue clear recommendations insisting on the applicability of the GDPR to work. It may also be necessary to determine the role that labor inspectors could or should play.

Further developing the “right to explanation”: When decisions supported by algorithms—processing of sensitive data, performance evaluation, distribution of tasks based on reputation data, profiling and so on—negatively affect workers or are associated with bias (in design or data), the right to explanation becomes an essential defense mechanism. A specific framework based on articles 12-15 and 22 and recital 71 of the GDPR must be developed and applied to all forms of employment. In practice, when an algorithm-supported decision has been made that negatively affects a worker, such a framework should allow the individual to obtain information that is understandable, meaningful and actionable; receive an explanation of the logic behind the decision; understand the meaning and consequences of the decision; and challenge the decision, vis-à-vis the employer or in court if necessary.



Purpose of AI algorithms: In a professional setting, having access to the code behind an algorithm is not useful in itself. What matters to workers is understanding the purpose of the AI system or the algorithm embedded in an application. This is partly covered by article 35 of the GDPR, on the obligation to produce data protection impact assessments. Additional measures are however needed to ensure that workers’ representatives are involved.

Involving worker representatives in workplace AI risk assessments, prior to deployment: Given the potential risk of misuse, as well as unintended or unanticipated harmful outcomes from AI systems, employers should be required under the proposed regulation to conduct technology risk assessments prior to deployment. Worker representatives should be systematically involved, playing a role in characterizing the level of risk arising from the use of AI systems and in identifying proportionate mitigation measures, throughout the life cycle. Risk assessments should address general cybersecurity, privacy and security issues, as well as specific associated threats.

Fighting intrusive surveillance: Conventional workplace monitoring is increasingly being replaced by intrusive surveillance, using data related to the behavior, biometrics and emotions of workers. Given the risk of abuse, legal provisions are necessary to prohibit such practices.

Empowering workers in human-machine interactions: This involves ensuring that workers are “in the loop” of fully or semi-automated decision-making and that they make the final decision, using input from the machine. This is particularly important when joint (man-machine) problem solving takes place. Increasing worker autonomy means maintaining the accumulated tacit knowledge of the workforce and supporting the transfer of this knowledge to the machine, whether a cooperative robot or software. This is particularly relevant for processes that require testing, quality control or diagnosis.

Enabling workers to gain “AI literacy”: Acquiring technical skills and using them at work, although necessary, is not enough and above all serves the interests of the employer. Becoming “AI-literate” means being able to understand critically the role of AI and its impact on one’s work and profession, and being able to anticipate how it will transform one’s career and role. Passive use of AI systems does not benefit workers themselves—some distance must be established for them to see the overall influence of AI. There is room here for a new role for worker representatives: signaling digital risks and interactions, assessing the uncertain impact of largely invisible technologies, and finding new ways to integrate tacit knowledge effectively into workflows and processes.

Two scenarios

In the negotiations on the AI Regulation, two possible scenarios have emerged. The first is to add “protective” amendments to the text. This may not be enough, as major fixes are needed to expand the regulation’s legal scope and make substantial changes to its substance.

The second scenario is to adopt a dedicated set of rules on AI for the workplace. These would complement the GDPR and the commission’s draft directive on improving working conditions on platforms, in particular with regard to algorithmic management.

As the Dutch toeslagenaffair has shown, algorithms can have a direct and damaging impact on people and the lives of workers. For trust ever to exist, the AI Regulation needs to be refocused: its current emphasis is on empowering businesses and promoting the EU as a global leader in AI, whereas the priority should be to protect citizens and workers.

Aida Ponce Del Castillo is Senior Researcher at the European Trade Union Institute.