When An Algorithm Doesn't Hire You
Life in the digital world is inevitably mediated and directed by algorithms that harvest and process the data fed into them, analysing it for patterns and correlations. Many also rely on artificial intelligence (AI), allowing them to run automatically, learn from their own processes and build predictive models through machine learning. However, these algorithms raise a number of questions about how biases in the underlying data, or in the programs themselves, can affect real-world actions such as seeking information or hiring new employees.
The pitfalls of algorithms relying on AI, and their potential for bias and misinformation, are well known, especially in the common digital services that people interact with every day. As early as 2004, Google’s page-ranking algorithm, which judged a page’s importance largely by the links pointing to it, ranked a neo-Nazi website, Jew Watch, as the top result for the search term ‘Jew.’ More recently, biases have been discovered in several social media algorithms: politically dissenting content has been censored or hidden, and narrow definitions of aesthetically pleasing content have been used to exclude people on the basis of race, gender, disability or social class.
However, the impact of biases in AI extends far beyond social media to any human activity mediated by AI. In fact, while early musings on the nature of AI, such as Isaac Asimov’s Three Laws of Robotics, expressed an apprehension that robots might come to dominate humans by transcending our foibles, the reality today is that AI technologies often perpetuate human biases on a much larger scale by automating them.
One field in which this has a massive impact is hiring. Various companies have used automated algorithms to sort through thousands of applications and select candidates on criteria such as job compatibility and expertise. However, these qualities and their relative importance are often highly subjective, and attempts to define them have frequently perpetuated biases that already exist in workplaces.
For example, an infamous hiring algorithm used by the retailer Amazon.com was trained to winnow down applications, using as templates the profiles of the company’s top employees, many of whom were male. In this way, the algorithm taught itself to downgrade candidates whose resumes mentioned participation in women’s sports or attendance at women’s colleges, among other terms it had learned to penalise.
While this was not intentional on the part of the developers, it stemmed from biased practices among the hiring managers and performance evaluators whose decisions supplied the training data. As the tech entrepreneur Julien Lauret wrote in response to the case, “The algorithms are never sexist. They do what we ask of them, and if we ask them to emulate sexist hiring managers, they do it without any hesitations.”
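To make the mechanism concrete, here is a minimal, hypothetical sketch (not Amazon’s actual system) of how a classifier trained on historically biased hiring outcomes can learn to penalise proxy terms such as ‘women’s’, using scikit-learn and a handful of made-up resume snippets:

```python
# Hypothetical illustration only: a resume classifier trained on biased
# historical decisions learns a negative weight for the word "women".
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training set: past resume snippets and whether the (biased) process hired them.
resumes = [
    "captain of men's chess club, python developer",
    "python developer, led backend team",
    "captain of women's chess club, python developer",
    "women's college graduate, python developer",
]
hired = [1, 1, 0, 0]  # labels reflect past bias, not candidate ability

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Terms that appear only on rejected resumes (e.g. "women") get negative
# coefficients and drag down the score of any future resume containing them.
for term, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{term:>10s}  {weight:+.2f}")
```

Gender is never an explicit feature here; the proxy words carry the historical bias forward on their own, which is essentially what was reported to have happened in the Amazon case.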
Some companies using AI tools in the hiring process go even further, claiming, for example, that they can gauge a person’s compatibility with a job role from a single video interview or through neuroscientific tests, both of which could be biased against candidates with disabilities. Several human resources experts, writing in the Harvard Business Review, pointed out that AI in hiring spans a wide gamut, including “game-based assessments, bots for scraping social media postings, linguistic analysis of candidates’ writing samples, and video-based interviews that utilize algorithms to analyze speech content, tone of voice, emotional states, nonverbal behaviors, and temperamental clues.” Often, these assessments claim to reveal characteristics that the individual may have wanted to keep private, such as their political views, disability, mental health status, lifestyle or sexual orientation, characteristics that could then play a role in their chances of being hired.
Beating the algorithm, meanwhile, forces candidates to conform to the norms it has set. Whether this means using action verbs such as ‘executed’ or ‘led’ in a resume that will be read by a bot, or echoing industry jargon and keywords from the job description, it leaves very little scope for an individual to get past the first stage of the hiring system with a resume that doesn’t appear to fit the job even when their skills do. This could lead to standardisation, forced conformity and eventual stagnation of the workforce, rather than bringing in new talent and perspectives.
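As a rough illustration of why that conformity pressure arises, the hypothetical screener below (all keywords and resume lines invented) simply counts how many terms from a job description appear in a resume; the phrasing, not the underlying skill, decides who clears the cut-off:

```python
# Hypothetical keyword screener: scores a resume purely by overlap with the
# job description's keywords, blind to equivalent skills phrased differently.
JOB_KEYWORDS = {"led", "executed", "stakeholder", "agile", "kpi"}
CUTOFF = 3  # minimum matches needed to reach a human reviewer

def keyword_score(resume_text: str) -> int:
    words = set(resume_text.lower().split())
    return len(JOB_KEYWORDS & words)

conformist = "led agile team, executed stakeholder reviews, tracked kpi targets"
plainspoken = "ran a small team, handled client reviews, tracked our numbers"

for resume in (conformist, plainspoken):
    score = keyword_score(resume)
    verdict = "advance" if score >= CUTOFF else "reject"
    print(f"{verdict}: score {score} -> {resume!r}")
```

Both descriptions could come from the same person doing the same work; only the one written in the screener’s vocabulary moves forward.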
It should, however, be noted that at present algorithms are largely confined to the initial selection of candidates through the analysis of resumes and applications. Even so, this means thousands of deserving candidates can be shut out for not matching the keywords or life experiences the algorithm has been trained to look for. Moreover, this comes at a time when many countries have large numbers of “hidden workers,” a heterogeneous group of individuals who lack the precise qualifications listed for jobs but often have the capabilities and work ethic that could make them a good fit.
A Harvard Business School report on these workers specifically cited automated hiring systems as one factor preventing them from entering the workforce. The systems were looking for specifics, such as a college education and past work experience in a similar role, which hidden workers did not necessarily have, thereby excluding them from the process. For example, if an algorithm screening for a coder was looking for a candidate with a degree in computer science, ideally from particular colleges, it would automatically exclude someone who had not been to a top-ranking college, or to any college at all, but had top-grade coding skills and knowledge. Companies noticed the problem too: 88% of those interviewed said the automated hiring process was weeding out qualified candidates for high-skilled jobs, and 94% saw a similar problem in mid-level jobs.
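A minimal sketch of such a rigid credential filter, with entirely made-up candidate records, shows how a checklist can reject a skilled candidate before anyone looks at their work:

```python
# Hypothetical hard filter: candidates missing the listed credentials are
# dropped regardless of demonstrated skill. All data is invented.
REQUIRED_DEGREE = "computer science"
PREFERRED_SCHOOLS = {"State Tech", "Capital University"}

candidates = [
    {"name": "A", "degree": "computer science", "school": "State Tech", "coding_test": 62},
    {"name": "B", "degree": None, "school": None, "coding_test": 97},  # self-taught
]

def passes_filter(candidate: dict) -> bool:
    return (candidate["degree"] == REQUIRED_DEGREE
            and candidate["school"] in PREFERRED_SCHOOLS)

for c in candidates:
    status = "shortlisted" if passes_filter(c) else "rejected"
    print(f"{c['name']}: {status} (coding test {c['coding_test']}/100)")
```

The self-taught candidate with the stronger test result never reaches a recruiter, which is exactly the pattern of exclusion the report describes.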
Thus, the pitfalls of an AI-based hiring system are numerous: it often carries inherent biases that many hiring managers also unconsciously endorse, while automatically filtering out candidates who could be equally qualified, if not more so. The idea of an algorithm making decisions about something as crucial as an individual's job, while not being equipped to handle the heterogeneity and diversity of potential candidates, speaks to the still-flawed nature of AI in many fields.
Whether in social media algorithms, hiring practices or the evaluation of recidivism risk in criminal justice systems, AI at this point in its evolution often seems to perpetuate humans' inherent biases rather than eradicate them. It is important that these fundamental issues in algorithms, and in the AI that backs them, be addressed before they are built into systems with such tremendous power to impact people's lives.