Brandice Sills-Payne: Outsourcing to AI is no shortcut to solving hiring bias ills

By Brandice Sills-Payne
Published on August 11, 2019

There are those who put in the hard work and those who scrutinize the hard work of others.  

Technologies that apply AI to hiring need both if they are to move forward in an effective and positive way. Human scrutiny is vital for companies that use AI when hiring: these technologies need a human element critiquing the process if they are to realize their potential to level the hiring landscape.

Joy Buolamwini’s ability to identify and call out biases within algorithms is well known. As a coder, she also embodies what we need more of in the field: someone who can develop a technology and still call her peers (and herself) into question over biases. In the arenas of sourcing and screening, there are many technologies available. Most developers seem to understand that how we build and train machines will determine how they’ll behave.

Yet, Buolamwini notes, access to diverse training data can be hard to come by when developers are writing code. Efficiency and project completion also often take precedence over safeguarding against bias.

Is this why there is so much biased tech out there in hiring? Can we go back and retrain what’s already been developed? What can we actually do to solve this problem right now as these technologies proliferate?  

How can we fix hiring biases?

Automation, metrics and big data are now the hallmarks of intelligent sourcing and screening. This is changing the way companies find new employees. The automated process should help combat bias in hiring as well as discover the best candidates available. Yet, when it comes to sourcing and AI we’re just not there yet. Building outside scrutiny into the process itself is necessary, even as developers strive to train more perfect machines. 

AI used in hiring should include a human-in-the-loop committed to identifying and removing biases. The technology needs a human component to safeguard against its limitations. This represents an effective hedge against unintended bias built into hiring algorithms.  

AI and beyond

To move beyond this approach, though, hiring algorithms should also answer three basic questions.  

The first question is, generally: Is it reliable? Coders must train AI to consistently produce similar results on the same training set. Then there is the question of validity: hiring algorithms must scour the web for candidates and match them on criteria relevant to what’s actually needed for the job.

Even if the technology is meeting both of these criteria, though, it still fails if it discriminates against certain groups.
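One widely used heuristic for that third criterion is the “four-fifths rule” from U.S. adverse-impact analysis: a screening tool is flagged if any group’s selection rate falls below 80% of the highest group’s rate. A minimal sketch of that check, with purely illustrative group names and numbers (none of this comes from the article):

```python
# A minimal sketch of the "four-fifths rule" adverse-impact check.
# Group names and counts below are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the screening tool passed through."""
    return selected / applicants

def passes_four_fifths(rates: dict) -> bool:
    """Flag potential adverse impact: every group's selection rate
    must be at least 80% of the highest group's rate."""
    highest = max(rates.values())
    return all(rate >= 0.8 * highest for rate in rates.values())

# Hypothetical screening outcomes per demographic group
rates = {
    "group_a": selection_rate(30, 100),  # 0.30
    "group_b": selection_rate(18, 100),  # 0.18
}

# 0.18 is below 0.8 * 0.30 = 0.24, so this tool would be flagged
print(passes_four_fifths(rates))  # prints False
```

A check like this is deliberately crude: it audits outcomes, not causes, which is exactly why a human-in-the-loop is still needed to interpret what a flag means.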

For example, if a candidate must take a visual test as part of an interview, is there also a version available for the visually impaired? Joy Buolamwini once built a mirror that ran on facial recognition technology. Yet it couldn’t recognize her face unless she wore a white mask. The AI had never been trained to recognize faces of African descent.

Coding should be inclusive. Many problems can be solved if we simply ask: Who is coding? Why are we coding? And, how are we coding?  Yet, even then it’s sometimes still not enough. 

A human-in-the-loop strategy is key

Companies shouldn’t rely on technology solely because it closes requisitions faster. Making decisions that are misguided or biased benefits no one. Automating certain stages of the hiring process is great (and necessary).  Yet, until the tech can demonstrate reliability, validity and fairness, human scrutiny is the best way forward. This method will improve retention rates, productivity, company culture and a company’s bottom line.

It’s often said: we don’t just need data, we need good data. So, too, do we need unbiased algorithms. Those who create amazing new technologies should welcome scrutiny in the name of civil liberty. Freedom from discrimination is non-negotiable. Humans are always evolving, and we need to constantly inventory how we reach our desired goals as we develop.

HR teams should be selective with the tools they use to source and hire people. They should work with those who are developing new technologies mindfully, automating certain steps of the hiring process while also being honest about the intricacies of human selection. Give a wide berth to those companies selling the idea that a computer has figured out what we, as a society, have not been able to solve over millennia.


Brandice Sills-Payne is a Columnist at Grit Daily and the Director of Experiential Marketing and Content at Fetcher. Based in Oakland, CA, she has an extensive background in event production and digital media. She has a passion for bringing awareness to and speaking about LGBTQ, POC, and women’s inclusion and equality in the workplace. She loves fantasy novels and holds a Bachelor’s Degree in History and Music from Loyola University New Orleans.
