Ethics and Digital Matching

The project aims at raising questions about ethics and sustainability, related to digital matching

Digital Matching: What Could Possibly Go Wrong?

The debate about ethical and unethical matching on the labour market is not new, but the emergence of digital matching algorithms brings new perspectives to it. It is important for the Swedish Public Employment Service to be transparent about how these new technologies can be used. This forum aims to discuss developments, potential improvements, and risks in relation to ethical digital matching.

Labour market matching refers to processes in which job seekers and employers collect and evaluate information about each other in order to find a relevant match between a job vacancy and potential candidates. Matching skills and jobs can thus lead to an efficient hiring process, which benefits both the employee and the employer.

The matching process can be carried out entirely through a data-driven approach, without human intervention; by people without any digital support; or, most commonly, through human-machine interaction. It is a common misconception that the more data and algorithms are used, the more objective the process becomes. Our point of departure is that at every stage of these processes, actions can occur that lead to unethical consequences, regardless of whether they are carried out by humans or by digital solutions.

Social Bias and Discrimination

Some of the most important data sources for the labour market are job advertisements, resume (CV) data and education course descriptions. Since discrimination occurs in society, a data-driven digital model trained on this kind of data risks reproducing and amplifying that discrimination.

This question became a hot topic in 2018, when it emerged that a machine learning-based tool for evaluating candidates, developed by Amazon, systematically ranked male applicants higher than female ones.(1) Another study, of automated advertisement management tools, found that gender bias arose when the tool distributed high-paying job offers more frequently to men than to women.(2)
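How a model inherits bias from historical data can be illustrated with a deliberately simplified sketch (all keywords and decisions below are hypothetical, not taken from any real system): a naive matcher that scores new applicants by their similarity to previously hired candidates will reproduce whatever bias shaped those past decisions.

```python
# Hypothetical historical data: resumes reduced to keyword sets, plus the
# (biased) human hiring decision the model will learn to imitate.
past = [
    ({"engineering", "chess_club"}, True),
    ({"engineering", "football"}, True),
    ({"engineering", "womens_chess_club"}, False),  # biased rejection
    ({"engineering", "womens_choir"}, False),       # biased rejection
]

def score(candidate, history):
    """Average keyword overlap (Jaccard) with previously *hired* resumes only."""
    hired = [kw for kw, decision in history if decision]
    return sum(len(candidate & kw) / len(candidate | kw) for kw in hired) / len(hired)

a = score({"engineering", "chess_club"}, past)
b = score({"engineering", "womens_chess_club"}, past)
# Equally qualified candidates, but the "womens_*" keywords never appear
# among past hires, so the model quietly penalises them.
print(a > b)  # True
```

No gender attribute appears anywhere in the data, yet the model discriminates via proxy keywords, which is essentially what was reported about the Amazon tool.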

Data-driven Methods Can Put Good Judgment (Phronesis) into Question

Employers appreciate good judgment in relation to their business needs. The good judgment they are searching for usually depends on a particular business context, but it is rarely described explicitly in the datasets. It is therefore difficult for data-driven matching methods to meet the requirements of good judgment. Text analysis tools and data-driven models assume that the meaning of words can be derived directly from their linguistic context; however, they ignore other contexts that are factually relevant.
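The assumption that meaning can be read off from linguistic context alone is the core of distributional semantics. A minimal sketch (with a hypothetical three-sentence corpus) shows how such a model judges two words similar purely because they share neighbouring words, with no access to the business context an employer actually cares about:

```python
from collections import Counter
from math import sqrt

# Hypothetical mini-corpus: the model only ever "sees" which words appear
# near each other; any context outside the text is invisible to it.
corpus = [
    "strong judgment in client negotiations",
    "strong judgment in vendor negotiations",
    "sound judgment in client negotiations",
]

def cooccurrence_vector(word, sentences):
    """Counts of words appearing in the same sentence as `word`."""
    vec = Counter()
    for s in sentences:
        tokens = s.split()
        if word in tokens:
            vec.update(t for t in tokens if t != word)
    return vec

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda w: sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v))

# "strong" and "sound" share most of their textual contexts, so the model
# treats them as near-synonyms, regardless of what either quality means
# for a particular employer.
sim = cosine(cooccurrence_vector("strong", corpus),
             cooccurrence_vector("sound", corpus))
print(sim > 0.9)  # True: high similarity, derived from text alone
```

Real systems use large corpora and learned embeddings rather than raw co-occurrence counts, but the limitation is the same: anything not expressed in the surrounding text is invisible to the model.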

The Promiscuous Use of Personal Information by Recruiters

Nowadays, the staffing industry collects large quantities of information about job seekers. In addition to CVs and cover letters, various personality, intelligence and assessment tests are often used. Furthermore, it is common practice to reuse information about the individual that has previously been stored on the website during a recruitment process.

The information that is stored is partially limited by the GDPR and by the applicant’s consent. However, questions have been raised as to whether this protection of personal data is sufficient, since the job seeker is in a vulnerable position regardless: in practice, the individual must consent to the collection of such personal information in order to get the job.

Career Skills and Digital Exclusion

Career skills, including knowledge about the labour market and about how the recruitment process is carried out, are crucial both for finding a suitable job and for getting hired. New digital technologies are transforming the recruitment process; matching algorithms, AI, VR, social media, recruitment systems, and similar tools are having a growing impact on the whole hiring process. Knowledge about how these different parts of the hiring process work is therefore vital for job seekers to succeed in their search. However, acquiring this knowledge is made harder by what is commonly referred to as the problem of algorithmic transparency: today, most job search sites lack open information about how their digital matching methods and tools work.

Unclear Responsibility When Data-driven Models Go Wrong

The question of accountability is closely related to algorithmic transparency, in what is often called algorithmic accountability.(3) A recruitment system that uses machine learning to gradually adjust who gets hired for a specific job, and what determines that outcome, rarely has a single point of failure. When something goes wrong, responsibility is therefore distributed among a variety of actors. Who really takes responsibility when an algorithm-driven decision leads to discriminatory or unfair consequences? Current legislation often lags behind these developments, which further complicates the requirements for accountability.(4)

Read the report on motivation-driven matching here (in Swedish).

References

  1. Datta, A., Tschantz, M.C., Datta, A. (2015). Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination. Proceedings on Privacy Enhancing Technologies, 1: 92–112. DOI: 10.1515/popets-2015-0007.
  2. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. https://www.sciencedirect.com/science/article/pii/S1566253519308103#fig0005
  3. Sustainable AI.