Peter Cassat discusses how AI can impact pay decisions in SHRM Magazine

Peter Cassat, a partner in Culhane Meadows’ Washington, D.C. office, was recently interviewed by SHRM Magazine for an article about how artificial intelligence may be used to make employee compensation determinations. Here are some excerpts from Peter’s interview:

Using AI [for compensation decisions] is not without challenges and risks. If not managed and monitored appropriately, AI-based compensation tools could start out with ingrained biases or become further biased over time.

“The risk is in the variables in connection with the data in AI,” said Peter Cassat, a partner with law firm Culhane Meadows Haughian & Walsh PLLC in Washington, D.C.

Research on AI outcomes by The Brookings Institution, a Washington, D.C.-based think tank, shows generally that if biased data feed the algorithm, results may be biased. For example, if some employees are being paid less than others despite having the same job, experience and skill levels, simply inputting that data into an AI-based pay system could perpetuate that bias.

“Bias in algorithms can emanate from unrepresentative or incomplete training data or the reliance on flawed information that reflects historical inequalities,” the Brookings report stated. If left unchecked, biased algorithms can perpetuate biases against certain groups of people “even without the programmer’s intention to discriminate.”

Employers should be mindful of how AI tools are functioning and what data they are collecting. “It is important to make sure this does not favor some groups over others based on factors like gender,” Cassat said.

Avoiding these problems begins with due diligence before choosing AI tools. Over time, it is also important to remain alert for any unintended consequences, not only in the recommendations the system outputs but also in how managers use the results.

“Don’t just implement and forget,” Cassat said. “Look at the results and whether and how they differ following implementation.”

Rigorous data governance for AI is also important to ensure AI-supported compensation decisions are fair and unbiased. "AI systems that are not ethically governed can promote exclusion and feel too intrusive—and even threatening—to those impacted by their decisions," Zarkadakis, another expert quoted in the article, said.

The complete article can be found here.

*Culhane Meadows is ranked by U.S. News/Best Law Firms in Technology Law, Bankruptcy/Reorganization Law, and Information Technology Law. This website and the communications herein may be considered attorney advertising. Previous results are not a guarantee of future outcome. This website is for informational purposes only and does not constitute legal advice. The information herein is not intended to create an attorney-client or similar relationship. Until you establish such a relationship and receive an engagement letter, you have not hired a Culhane Meadows attorney nor become a client of the firm. Whether you are a new or existing client of the firm, Culhane Meadows must determine that there is no conflict of interest and that it is willing and otherwise able to accept the new engagement before representing you on a new matter. Only if and after Culhane Meadows has informed you it is willing and able to accept your new matter should you send the firm any information or documents that you consider private or confidential. Such information will not be treated as private, confidential or otherwise protected from disclosure until Culhane Meadows has communicated in writing that it is willing and able to accept your new matter and provide you with legal counsel. Whether you need legal services and which lawyer or law firm you select are important decisions that should not be based on this website alone.
