The Emergence of Algorithmic Bosses: Framing a Legal Response

September 28, 2020 | Kruthika R

 

Source: Wikimedia Commons; image for representational purposes only

 

A few months ago, two Uber drivers from the United Kingdom, Azeem Hanif and Alfie Wellcoat, filed a case against Uber alleging discrimination by its algorithm. They brought the case before a District Court in Amsterdam, where Uber’s international headquarters is located. One of their central claims is that Uber’s algorithm determines the nature of their rides: which drivers get the short rides and which get the more desirable ones. The automated decision-making process, they allege, lacks transparency and rests on arbitrary factors, leaving drivers in the dark about how the AI decides these questions.

 

We are seeing increasing instances of human bosses being replaced with algorithmic ones, and this Uber episode is yet another example of the emergence of algorithmic control and determinism in workplaces. With it comes a host of legal issues that scholars have attempted to unravel. Jeremias Prassl’s scholarship is a useful starting point for unpacking them. Prassl argues that a legal response to algorithmic bosses should take into account three elements: data, processing and control.

 

Data 

 

Workplaces increasingly collect digital data on their employees. Information can be gathered from an employee’s online activity, phone calls, emails and so on. In some circumstances, workplaces deploy sophisticated sensors, as in the case of Uber, which sought to determine the driving patterns of its drivers using the sensors on their phones. The obvious concern here is privacy and data protection. The European Union’s General Data Protection Regulation (‘GDPR’) might be the starting point in responding to data-related legal issues. Employers must not only establish that their employees have consented to the collection of their digital data, but also provide a legitimate reason for such collection. Additionally, the employer must show that the data collection is fair, proportionate and transparent.

 

Processing 

 

Employers now use extremely powerful tools to process and analyse the data they collect. Sophisticated artificial intelligence and machine learning make it possible to draw conclusions from large amounts of data in virtually no time. Here again the GDPR applies: Articles 5 and 35 impose obligations on how data is processed, including safeguards for sensitive personal data such as ‘racial or ethnic origin’, political opinions, and religious or philosophical beliefs. But these obligations are largely procedural. Should there also be a substantive ‘right to explanation’? Should employees be made aware of how the algorithms function and the logic that governs them? Legal scholarship is yet to answer this categorically.

 

Control

 

In addition to collecting and processing data, workplaces have begun using AI and machine-learning software to make decisions on terminations and other sanctions: Amazon’s system automatically generates termination notices after a series of warnings, while Uber temporarily deactivates drivers who repeatedly turn down non-lucrative rides. Prassl recommends that, in tackling the legal issues this raises, we broaden the traditional legal understanding of employment. Relying on a US precedent, Douglas O’Connor v. Uber Technologies Inc., he argues for a ‘broader range of instructions and control’ test to bring such workplaces within legal and regulatory regimes.
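To make the nature of such automated sanctions concrete, the sketch below shows a deliberately simplified, hypothetical threshold rule of the kind described above. The field names, thresholds and Python code are assumptions made purely for illustration; the internal logic of the systems actually used by Amazon or Uber is not public.

```python
# Hypothetical illustration of a threshold-based "algorithmic boss" rule.
# Thresholds and fields are assumed for illustration only; they do not
# reflect Amazon's or Uber's actual systems, whose logic is not public.

from dataclasses import dataclass


@dataclass
class WorkerRecord:
    worker_id: str
    warnings: int         # productivity warnings issued so far
    declined_rides: int   # non-lucrative rides turned down in the current window


WARNING_LIMIT = 3         # assumed threshold, chosen arbitrarily
DECLINE_LIMIT = 5         # assumed threshold, chosen arbitrarily


def decide_sanction(record: WorkerRecord) -> str:
    """Return an automated sanction with no human review step."""
    if record.warnings >= WARNING_LIMIT:
        return "terminate"               # e.g. an auto-generated termination notice
    if record.declined_rides >= DECLINE_LIMIT:
        return "temporary_deactivation"
    return "no_action"


# The worker sees only the outcome, not the thresholds or inputs that produced it,
# which is precisely the transparency gap the post describes.
print(decide_sanction(WorkerRecord("driver-42", warnings=1, declined_rides=6)))
```

Even in this toy form, the decision rule is invisible to the worker unless it is disclosed, which is why Prassl and others argue that regulation must reach the control exercised through such systems, not just the data that feeds them.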

 

While debates on this issue are at a fairly nascent stage, the outcome of Hanif and Wellcoat’s challenge will be crucial in understanding and shaping the legal response to regulating ‘algorithmic bosses’. Is the Indian legal and regulatory regime equipped to address similar issues? We will attempt to answer this in future posts.

 

 
