If you use AI that monitors whether interviewees sweat, whether loan applicants fill in their applications in lower-case letters or whether users scroll through online application forms too quickly – or if you make any form of automated decision – then read on!

We recently wrote about a Hungarian case in which a bank used AI to analyse phone calls with customers to ascertain their emotional state. Automated decisions were then made to spot dissatisfied customers, rank the calls and produce a priority list of customers to be contacted. The Hungarian data protection authority fined the bank the equivalent of €700,000. It also required the bank to stop the data processing unless it could prove that it had a) appropriately scoped the data to be processed, b) put in place a valid data protection impact assessment and c) identified a lawful basis for the processing.
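For the technically minded, a minimal sketch of what such a pipeline might look like is below. To be clear, the decision does not disclose the bank's actual implementation; Call, emotion_score and rank_calls_for_callback are hypothetical names invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Call:
    customer_id: str
    transcript: str
    emotion_score: float  # hypothetical model output: 0.0 (calm) to 1.0 (angry)

def rank_calls_for_callback(calls: list[Call], threshold: float = 0.7) -> list[Call]:
    """Flag calls whose emotional-state score exceeds a threshold and rank
    them so the apparently most dissatisfied customers are contacted first."""
    flagged = [c for c in calls if c.emotion_score >= threshold]
    return sorted(flagged, key=lambda c: c.emotion_score, reverse=True)

calls = [
    Call("A-101", "...", 0.92),
    Call("B-202", "...", 0.35),
    Call("C-303", "...", 0.78),
]
for call in rank_calls_for_callback(calls):
    print(call.customer_id, call.emotion_score)  # A-101 first, then C-303
```

The point for data protection purposes is that every step here – scoring, flagging, ranking – is automated, with no human review before the output determines who gets contacted.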

Hot on the heels of that decision, an Oxford academic has written a paper arguing that AI creates unintuitive and unconventional groupings of people in order to make life-changing decisions, but that current laws do not protect members of these online algorithmic groups from unfair AI-generated outcomes. The paper argues that, due to the growing use of AI, the public is increasingly the unwitting subject of new forms of discrimination that fall outside the current protected categories such as sex or race. 

The paper cites the example of using a certain type of web browser, which can result in a job applicant being less successful when applying online. Candidates in online interviews may be assessed by facial recognition software that tracks facial expressions, eye movement, respiration and/or sweat. (We hear that if you’ve done an online speed awareness course, you might know how similar ‘attention focus’ technology is used!)

The paper claims there is an urgent need to amend current laws to protect the public from this emerging discrimination, and that AI is creating new digital “algorithmic” groups in society whose members are at risk of being discriminated against. It argues that these individuals should be protected by re-interpreting existing non-discrimination law.

AI-related discrimination can occur in everyday activities without the individuals concerned being aware of it. In addition to job applications, other scenarios include applying for a loan, where an applicant is more likely to be rejected if they use only lower-case letters when completing their digital application – or if they scroll too quickly through the application pages.

The paper highlights that these new forms of discrimination often do not fit the traditional norms of what is currently considered discrimination and prejudice. AI challenges our assumptions about what legally constitutes discrimination: it identifies and categorises individuals based on criteria that are not currently protected under the law. Familiar categories such as race, gender, sexual orientation and disability are replaced by groups like dog owners, video gamers, Safari users or “fast scrollers” when AI makes hiring, loan or insurance decisions.
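To make that abstract point concrete, here is a deliberately simplified, hypothetical sketch of how an automated screening model could disadvantage an “algorithmic group” such as fast scrollers without ever touching a protected characteristic. The feature names and weights are invented for illustration and do not come from the paper:

```python
# Hypothetical feature weights a trained screening model might have learned.
# None of these inputs is a protected characteristic, yet applicants who
# scroll quickly or type in lower case form a disadvantaged "algorithmic
# group" that current non-discrimination law does not recognise.
WEIGHTS = {
    "income_to_debt_ratio": 0.6,
    "lowercase_only_application": -0.3,  # typed entirely in lower case
    "fast_scroller": -0.25,              # scrolled the pages "too quickly"
    "safari_user": -0.1,
}

def screening_score(features: dict[str, float]) -> float:
    """Weighted sum of behavioural features; negative means reject."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

applicant = {
    "income_to_debt_ratio": 0.8,
    "lowercase_only_application": 1.0,
    "fast_scroller": 1.0,
    "safari_user": 1.0,
}
if screening_score(applicant) < 0.0:
    print("Application routed to rejection queue")
```

No protected category appears anywhere in the model, yet the applicant above is rejected largely because of how they scrolled and typed – exactly the kind of outcome the paper argues existing non-discrimination law fails to capture.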

The EU has proposed its new AI Act, which is currently passing through the EU legislative process and is due to be voted on by the European Parliament in September. The UK government is also considering these issues as part of its AI strategy and the upcoming Data Reform Bill. Back in 2020, the Centre for Data Ethics and Innovation issued a report on bias in AI, although it did not cover certain practices, such as the use of facial recognition. We expect the sheer weight of reports and concern now being expressed to lead governments and regulators to take these issues more seriously.

We sense the landscape is changing fast, so we’d encourage organisations to shape their technology and data processing strategy and adoption around the regulatory direction of travel, which means involving all of the right internal stakeholders early on. 

Do get in touch if you would like help in ascertaining whether your existing (or future) AI system design or deployment is going to make you sweat….