Uber Eats delivery driver’s battle with AI discrimination highlights challenges in achieving justice under UK legislation

1. Uber Eats courier Pa Edrissa Manjang, who is Black, received a payout from Uber after facing racially discriminatory facial recognition checks that prevented him from accessing the app.
2. The case raises concerns about the use of AI systems and how UK law deals with potential biases, lack of transparency, and difficulties in achieving redress for those affected.
3. Despite settling with Manjang, Uber denies any fault in its facial recognition system; the case also highlights gaps in the Information Commissioner’s Office’s enforcement of UK data protection law.

Uber Eats courier Pa Edrissa Manjang, a Black man, received a payout from Uber after experiencing racially discriminatory facial recognition checks that prevented him from accessing the app. The incident raises concerns about whether UK law can keep pace with the growing use of AI, particularly automated systems rushed to market that can cause individual harms.

Uber’s facial recognition system, based on Microsoft’s technology, required couriers to submit live selfies to verify their identity. Manjang’s account was suspended and later terminated after he repeatedly failed these ID checks. He filed legal claims against Uber with support from the Equality and Human Rights Commission (EHRC) and the App Drivers and Couriers Union (ADCU), leading to years of litigation before a settlement was reached.

The settlement with Manjang raised questions about the effectiveness of UK laws governing AI usage. Manjang’s case rested on a discrimination claim under the Equality Act 2010, highlighting the challenges AI presents in the workforce. EHRC chairwoman Baroness Kishwer Falkner criticized the lack of transparency and accountability in the Uber processes that led to the discrimination.

UK data protection laws were also relevant to Manjang’s case: data access rights allowed him to obtain evidence of the failed ID checks. However, enforcement gaps, such as the lack of intervention by the ICO, have undermined the legal protections available to individuals affected by biased AI systems. The UK government’s decision not to introduce dedicated AI safety legislation further complicates the regulatory landscape.
