Peter Cassat, a partner in Culhane Meadows’ Washington, D.C. office, was recently interviewed by SHRM Magazine for an article about how artificial intelligence may be used to make employee compensation determinations. Here are some excerpts from Peter’s interview:
Using AI [for compensation decisions] is not without challenges and risk. If not managed and monitored appropriately, AI-based compensation tools could start out with ingrained biases or become further biased over time.
“The risk is in the variables in connection with the data in AI,” said Peter Cassat, a partner with law firm Culhane Meadows Haughian & Walsh PLLC in Washington, D.C.
Research on AI outcomes by The Brookings Institution, a Washington, D.C.-based think tank, shows generally that if biased data feed the algorithm, results may be biased. For example, if some employees are being paid less than others despite having the same job, experience and skill levels, simply inputting that data into an AI-based pay system could perpetuate that bias.
“Bias in algorithms can emanate from unrepresentative or incomplete training data or the reliance on flawed information that reflects historical inequalities,” the Brookings report stated. If left unchecked, biased algorithms can perpetuate biases against certain groups of people “even without the programmer’s intention to discriminate.”
Employers should be mindful of how AI tools are functioning and what data they are collecting. “It is important to make sure this does not favor some groups over others based on factors like gender,” Cassat said.
Avoiding these problems begins with due diligence before choosing AI tools. Over time, it is also important to remain alert for any unintended consequences, not only in the recommendations the system outputs but also in how managers use the results.
“Don’t just implement and forget,” Cassat said. “Look at the results and whether and how they differ following implementation.”
Rigorous data governance for AI is also important to ensure AI-supported compensation decisions are fair and unbiased. “AI systems that are not ethically governed can promote exclusion and feel too intrusive—and even threatening—to those impacted by their decisions,” said Zarkadakis, another expert quoted in the article.
The complete article can be found here.