A developmental psychologist by training, panelist Adrianne Pettiford understands the implications of an AI that draws from historically biased data.
After spending much of her career focused on diversity and inclusion from an institutional perspective, Pettiford is now Head of Client Insights and Analytics at Pymetrics, where she draws on that background to advance employment equity and workplace diversity through the technology deployed to clients.
“I come to this really from the lens of ‘What are the tools we’re using to assess candidates?’ in terms of selecting them for promotion, other internal moves, or hiring,” she said.
“After having spent several years at the EEOC (Equal Employment Opportunity Commission) on the regulatory side, looking at employers who were, unfortunately, not doing things in the best way, I’m now in a space working for a tech vendor that’s being intentional about designing products and tools with diversity in mind.”
An AI talent solutions company, Pymetrics builds custom algorithms for clients using neuropsychology exercises intended to match candidates to their best-fitting roles, a complex function performed far faster than human talent acquisition specialists can manage.
Pettiford noted that while technology is far more efficient at deploying employee assessments and screening resumes for recruiting, the ability to run these tools at scale means that any bias present in the technology extends biased evaluations at that same scale.
The bright side?
“We can audit these tools in a way we never could with human decision-makers,” she reported. “We could never open someone’s thought processes and pluck out the biased logic or decision-making – but we can audit our tools and isolate those features and measures that contribute to bias.”
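That kind of audit can be surprisingly mechanical. As a minimal sketch (the group labels, data, and function below are hypothetical, not Pymetrics’ actual methodology), one common first pass is the EEOC’s four-fifths rule: compare each group’s selection rate under the tool to the highest group’s rate, and flag any ratio below 0.8.

```python
from collections import defaultdict

def adverse_impact_ratios(records):
    """Compute each group's selection rate and its ratio to the
    highest-rate group (the EEOC 'four-fifths' screening heuristic).

    records: iterable of (group, selected) pairs, where `selected`
    is True if the candidate passed the tool's screen.
    """
    totals = defaultdict(int)
    passed = defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            passed[group] += 1

    rates = {g: passed[g] / totals[g] for g in totals}
    top = max(rates.values())
    # A ratio below 0.8 is the conventional flag for adverse impact.
    return {g: (rate, rate / top) for g, rate in rates.items()}

# Hypothetical audit data: (group label, passed the screen?)
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 35 + [("B", False)] * 65

for group, (rate, ratio) in adverse_impact_ratios(outcomes).items():
    flag = "ADVERSE IMPACT FLAG" if ratio < 0.8 else "ok"
    print(f"group {group}: rate={rate:.2f}, ratio={ratio:.2f} [{flag}]")
```

The four-fifths rule is a screening heuristic rather than a legal safe harbor; as Pettiford describes, a fuller audit would go on to isolate which input features drive any disparity.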
For companies looking to implement AI technology within their HR processes, Annette Tyman, Labor & Employment Partner at Seyfarth Shaw, said auditing those functions for bias is a critical step.
“When we’re talking about applying these assessments and evaluations to scale, from a legal standpoint what I hear is ‘risk’,” Tyman said. “The volume at which AI tools can process information is exponentially larger, and the litigation risk for companies is that much greater.”
Tyman underscored the heightened risks for companies that deploy AI at scale without customizing their approach through vendors like Pymetrics.
“I often hear of employers applying a single AI tool for all jobs across their company,” she stated. “What does that mean? How is that technology created? The same tool can find all of your top performers across all jobs? That’s a tall order.”
On the question of scale, Tyman noted that if a company-wide AI solution introduces bias and triggers legal consequences for an employer, at larger corporations that could mean litigants numbering in the tens of thousands.
“Are you going to be, as a company, able to explain to an investigator or prosecutor the data and information used in the algorithms you employed?” she asked. “Are you going to be able to explain how hiring or promotional decisions were made by the technology? There are a lot of areas of potential concern.
There are considerations not only for diversity, but also for other protected groups, like individuals with disabilities. There’s also a lot of discussion happening in the civil rights context. Understanding all of these things and having a sense of awareness as an organization is crucial as you move forward.”