Software that uses artificial intelligence in the employment relationship has become a reality not only in Switzerland but also in other highly developed countries such as the United Kingdom, Germany, France, and the United States.
Audio and facial expression technology allows hiring managers to analyze job candidates’ language, tone, and facial expressions during online interviews streamed on their laptops or mobile devices. The applications’ algorithms try to identify and match thousands of pieces of facial and linguistic information compiled from previous interviews. Tools developed in Switzerland go even further, using AI to essentially read candidates’ minds and greatly expanding what is commonly called “biometrics.”
It is therefore high time to explore in more detail how European data protection laws should address these new workplace technologies. The Swiss Federal Parliament is currently revising legislation enacted in 1992 to conform with European standards.
Consent and Core Content in Data Protection Legislation
The discussion about biometric technology in the employment arena revolves around two key issues, although these issues apply to other business settings, including videoconferencing.
The first issue relates to the extent a job candidate or employee can consent to an employer’s use of biometric technology, and whether such consent is voluntary. The second concern pertains to the so-called core content of data protection, a topic not discussed much in legal literature (with the usual exception of German legal commentators who have raised the question of core contents in the context of fundamental constitutional rights).
Consent in Data Protection Legislation
It is commonly known that consent is not a miracle cure for processing personal data. Consent is always the last option on which employers should rely, particularly because an employee can withdraw consent at any time, sometimes with retroactive effect.
European data protection authorities have made it clear that if an employer relies on a worker’s consent for any part of the processing, it must respect that choice and stop the activity in whole or in part if the employee withdraws consent. As such, an employer cannot justify the processing based on an otherwise legitimate interest (even if such legitimate interest existed before consent was requested) once the employee or any other subject withdraws his or her consent.
The GDPR defines the basic requirements for valid legal consent:
“Consent of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.”
Inappropriate pressure or influence that could sway the outcome of the job candidate’s or employee’s choice might render the consent invalid. By definition, the employment relationship is always imbalanced, despite the popular business brouhaha about flat hierarchies and agile working. In other words, an employee might always worry that his or her refusal to consent may have severe negative consequences for the employment relationship. Put differently, the entire concept of voluntary consent in the employment arena is both highly questionable and fragile.
Core Content in Data Protection Legislation
Generally, European data protection laws consistently prohibit the processing of “highly personal data,” such as information that reveals racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership. Also generally banned is the processing of genetic and biometric data, health-related data, or data concerning a natural person’s sexual orientation. There are well-known exemptions, including medical diagnosis (and an employee’s ability to work) and preventive or occupational medicine.
It can be said that European data protection laws acknowledge the existence of highly personal employee data that should not be collected at all, or that can be collected and processed only with specific safeguards in place. So far, however, it remains unclear what data so-called proprietary machine learning systems actually process.
The U.S. company HireVue is by far the market leader in this largely uncharted business territory. To protect its trade secrets, HireVue offers only limited access to its workplace interviewing algorithms. It has given only vague explanations of which words, facial and body expressions, and behaviors yield the best results, and has declined for the time being to make the system available for an independent audit.
It can only be assumed that the data sets HireVue’s proprietary machine learning systems collect and process go far beyond what European data legislation defines as “highly personal data.”
Some Preliminary Thoughts and Conclusions
Considering all of the above, one might conclude that biometric data sampled by facial and voice analysis technologies and emerging mind-reading algorithms is not covered by current European and Swiss data protection legislation. This means there might be no legal basis for retrieving personal data using these methods. If worst comes to worst, the “miracle cure” concept of voluntary consent in the employment arena might not help either.
As in most (but not all) segments of life, more transparency could be the key to understanding the use of proprietary machine learning systems in the employment context.
Conversely, in all fairness, proprietary machine learning systems also hold promise for a better workplace in the future. HireVue and other service providers rightfully claim that an employer using their products can review far more job candidates than ever before. A broader pool of potential candidates also enhances the opportunity for employers to select applicants who might not have been identified by humans in a corporate human resources department.
It is also widely accepted that proprietary machine learning systems are often more knowledgeable, more attentive to detail, and more neutral than a hiring manager, especially after a long day of job interviews. We must also recognize that humans can build biases, intended or not, into data processing algorithms. However, one may also observe that the bias created by algorithms sometimes merely replaces the human bias that exists anyway.
Dr. Thomas Rihm
Geschäftsführender Partner / Managing Partner