There is a rush around the world to explore AI systems in all facets of the workplace. Being the "first adopter" and getting "ahead of the curve" are phrases commonly heard. But the ICO, as part of its focus on AI in 2020 (and into 2021), has released a paper striking a more cautionary, and thoughtful, note.

On 18 December 2020 the ICO released its paper on "Six things to consider when using algorithms for employment decisions". The paper is interesting for all data & privacy lawyers, but especially those with an interest in data & privacy in the workplace.

In summary, the ICO has one main focus: bias and discrimination in AI decision-making systems, and how to mitigate these risks. It looks at how bias and discrimination can be a problem in human decision-making, and how this can flow into AI systems; it then looks at what employers can do to mitigate this risk, e.g. conducting detailed DPIAs that address both the data & privacy aspects of any workplace AI system and the "equalities law" aspect.

This last point is important and is often missed in the rush to be the first adopter of new workplace technologies. We can often seemingly solve data & privacy issues through a thorough DPIA process (a technical tweak here, a transparency nudge there, ensuring our DPA with any third party complies with Article 28, etc.), but employers should always remember the wider employment law context, which needs to be taken into account throughout the planning and implementation stages of any project. An AI recruitment tool that is seemingly superbly efficient, accompanied by a detailed and compliant transparency notice and backed up by elevated security measures, is worth little if the results it produces are open to allegations of discrimination and bias. Not only will the system not work as intended, but the risk to the business of discrimination claims is material.

The ICO does, however, finish on a positive note, suggesting that "Algorithms and automation can also be used to address the problems of bias and discrimination" and that ingrained human biases might be alleviated through the use of "neutral" (or bias-corrected, perhaps) AI decision-making systems.