Introducing the Algorithmic Boss

All managerial functions are now entrusted to algorithm-powered tools, creating high expectations and new risks, and no economic sector is immune to the adoption of such systems. How can this AI-driven management be regulated? The EU institutions are engaged in a promising process that may lead to new regulatory solutions, but social dialogue and collective bargaining will remain essential.


Facial recognition and real-time scheduling, smart badges and QR codes, GPS tracking and wristbands, job applicants’ assessment and health self-reporting have all ballooned in the last year. The labor market is experiencing a bewildering dystopia and, to a certain extent, we are docilely enjoying it. Given the potential to open up a myriad of new opportunities, authentic digital innovation is to be welcomed, especially when it facilitates discretion, promotes inclusiveness, and alleviates hurdles (during the pandemic, it has enabled several offsite activities despite the awful circumstances). However, when intrusive forms of workplace governance are given too much leeway, they may undermine human agency, erode labor rights, and thwart productivity. Indeed, if left ungoverned, evidence-based management is likely to perpetuate bias, disseminate unfair treatment, and exacerbate inequality, thus paving the way to social unrest. Reassuringly, there are several instruments that can be mobilized to counterbalance this unrestrained techno-managerial dominance. While regulation can play a crucial role in taming algorithmic bosses, collective negotiation, digital literacy, and workers’ empowerment must be strengthened in order to unleash a sustainable and prosperous world of work, rather than indulging data-centric obsessions.

A long-lasting process of datafication of working relationships – combined with the cheap availability of tracking systems, the dizzying blurring of professional and private lives, and enhanced reliance on ubiquitous digital devices – has created an enticing opportunity to redesign power dynamics in the workplace, aggravating existing information asymmetries. AI-driven and more mundane software are now widely used to complement the role of managers and supervisors in all their tasks, from onboarding to promotion, from monitoring to firing. Often marketed as unbiased, fraud-proof, and objective, the algorithms that fuel these practices are in fact abstract, formalized instructions for conducting a computational procedure aimed at achieving a result by increasing efficiency and enhancing performance.

Game-changing technologies reflect business needs and specific preferences and, on many occasions, have proven to be far from perfect, as they replicate and reinforce human stereotypes or measure pointless parameters. What is worse, given their obscure nature, these models end up limiting the understanding of employers’ strategies, jeopardizing contestation. They also leave workers in the dark, thereby freezing collective action. This leads to an aggravation of societal inequalities. However, in theory at least, automated decision-making (ADM) processes could be interrogated – thanks to the wealth of multi-source granular data they draw on – to expose and curb the flaws embedded in human decision-making.

Platform work was only the appetizer. Algorithmic HR management is the icing on the cake.

Besides permeating all aspects of society, technology is significantly rewiring workplaces and reshaping labor processes. The Covid-19 crisis has further accelerated a trend towards the digital transformation of managerial functions. Homeworking arrangements increased in an effort to limit the risk of infection, while essential workers were provided with anti-virus wearables and followed new health protocols; scattered teams resorted to cloud-based collaborative platforms for project administration; new-hire interviews and networking events migrated online due to travel restrictions; academic centers began panic-buying proctoring software.


In the last decade, nowhere has this trend been more evident than in the several industries lumped together under the “gig economy” label (last-mile logistics, transportation, food delivery, cleaning, accommodation and catering, remote freelancing). Platform workers have witnessed large-scale experimentation with rating, task allocation, incentivization, customer reviews, and gamification – practices which have now spiraled beyond the growing boundaries of this sector. In countries such as Spain, France, the Netherlands, the UK, and Italy, courts, inspectorates, and legislatures are effectively closing enforcement loopholes after years of perilous doublespeak and uncertain litigation. The drawn-out fight over the appropriate legal classification of riders and couriers will probably end soon. Going forward, the wildest inventions tested in this arena are here to stay, posing significant new challenges.

In the past, forms of all-encompassing surveillance were used to make classifications and get a sense of workflow bottlenecks or deviant conduct (essentially, the use of data was eminently descriptive). Nowadays, a deeper dependence on inferential analytics, favored by machine learning, helps managers to detect patterns and generate predictions about team dynamics, future behaviors, and career prospects. At the same time, an overwhelming system of tacit penalties and rewards is also expected to force compliance, thus subtly reconfiguring interactions. Workers’ choices are therefore severely constrained by prescriptive AI tools.

Katherine C. Kellogg and her co-authors argue that new models of algorithmic management are more “comprehensive, instantaneous, iterative and opaque” than before. From a purely labor law perspective, this causes a genetic mutation of the traditional authority conferred on employers, one that is not matched by an equivalent activation of classical counterweights. Addressing this widespread augmentation in contractually unbalanced situations, and beyond the temporal and spatial boundaries of the workplace, is vital for two main reasons. First, to escape a process of commodification of working relationships and dilution of corporate obligations. Second, to ensure that mutual trust flourishes, thus avoiding a loss of competitiveness and engagement, if not surreptitious resistance and open dissent from workers.

Advanced technologies are not making humans redundant. They are making workers obedient and managers superfluous. This presents an urgent problem that must be tackled. The existing limits to the expansion of managerial powers were conceived when the potential of new techniques was admittedly unthinkable, at a time when supervision was exercised in a direct and physical way by humans.

We are now witnessing attempts to track sentiments and predict mood changes, while bosses are able to create highly insidious “data portraits.” As a result, social institutions are under pressure in a phase of transition. This profound sophistication should encourage us to rediscover the prominent principles on which labor regulation is premised: human dignity at work above all. Regulators should prompt an open, wide-ranging discussion on the social desirability of algorithms at work. As claimed by the OECD, “collective bargaining, when it is based on mutual trust between social partners, can provide a means to reach balanced and tailored solutions […] to emerging issues, and complement public policies in skills needs anticipation, or support to displaced workers” in a flexible and pragmatic manner. Conversely, perpetuating the techno-determinist narrative risks downplaying the much-needed collective scrutiny and bottom-up negotiation on ground-breaking innovation. To put it bluntly, answering these drawbacks with a mere recalibration of the faulty architecture amounts to a feeble surrender.


Without anti-discrimination and OSH tools, privacy law is a blunt weapon.

There is a great deal of excitement – and also anxiety – about management by algorithms. Adoption, albeit patchy, is gaining momentum – though institutions do seem aware of the ramifications of the expansion of managerial prerogatives in warehouses, offices, and homes. Some shifts are seemingly underway. Not only has the European Commission’s consultation on the legislative initiative on platform work tackled this issue, but the Action Plan on the Pillar of Social Rights – a flagship political initiative of the EU institutions – is also tasked with untangling digitized management in order to reap its benefits while addressing its harmful consequences (the stated aims are to “improve trust in AI-powered systems, promote their uptake and protect fundamental rights”). Although it establishes a broadly consistent framework, the General Data Protection Regulation may have a limited scope, as it was designed to encourage data flows. Its key mandatory requirement in workplace relationships is to implement the least rights-intrusive option available. Article 8 of the Council of Europe’s Modernised Convention 108 can offer a more human-rights-based shield against pervasive control. However, current privacy standards may fall short of providing meaningful protection if narrowly interpreted and applied. Since AI and algorithms are substituting for bosses in a variety of functions, we need a wide-ranging set of initiatives to regulate, if not ban, solely automated decision-making.

Thanks to a pioneering negotiation, Spain will be among the first countries to introduce workers’ right to be informed about the logic of algorithms that produce decisions concerning working conditions and the access to and retention of work, including profiling. The ground-breaking rule will apply to everyone working through a platform. Now that public opinion has realized the perils of an unbalanced dependence on algocratic models, workers are gaining access to the internal metrics defined by “black boxes” and are improving the legibility of final decisions. According to a solid legal tradition, in several jurisdictions a consultation phase with workers’ representatives is required before introducing monitoring devices, transparency being the guiding principle. Unions and businesses are now engaging in “social digilogue” on new subjects of bargaining, such as technology-coded human resources policies and extensive electronic performance monitoring in the workplace, as well as their enormous repercussions for occupational health and safety and data protection. Meanwhile, domestic Data Protection Authorities have proven vigilant and proactive, especially when technology was naively depicted as a panacea. The same goes for the European Data Protection Board (EDPB), whose approach is praiseworthy, as it advocates reinforced levels of meaningful consent in situations that are inherently unbalanced.

To improve working conditions, policy makers and social partners must revive the importance of anti-discrimination and occupational safety and health (OSH) instruments, and the relevance of challenging domination at work. Regrettably, the ample set of principles informing data collection may very well be powerless in the face of the repurposing of information for less benign ends, especially when pre-built or bespoke digital systems are rented from third parties. The introduction of modern technology assuming executive powers ought to be collectively regulated. Solutions must be systemic and encompass complementary tools selected according to the final use of algorithms at work. If they are meant to set the pace of work, streamline workloads, or reconfigure duties, OSH regulation comes into play to address physical and psychosocial risks. When choices are made about the competitive attribution of entitlements, such as hiring, rewards, and promotions, a modern understanding of anti-discrimination provisions is essential to avert prejudiced outcomes for women, younger and less-educated workers, minorities, and vulnerable groups. Alongside data protection law, a more effective enforcement of these often-neglected instruments has great potential in the new world of work. Whatever the future of work holds, adopting a human-in-command approach is pivotal.


© IE Insights.


This article is written in the framework of the “Boss Ex Machina” project, which has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under the Marie Skłodowska-Curie grant agreement No 893888. A shorter version of this article appeared in the “Progressive Post”. The authors have recently published “Il tuo capo è un algoritmo. Contro il lavoro disumano” (Laterza, 2020).
