For fair hiring in the AI age, regulate AI vendors, not just employers

The lesson of New York City’s requirement for bias audits of employment algorithms is that policymakers must regulate AI vendors.  

A landmark New York City law, known as Local Law 144, requires employers to conduct annual bias audits of the automated employment tools they use. I testified in favor of an earlier version that put the burden on the vendor of the employment tool. As I saw it, public disclosure of any disparate impact of an automated employment tool would be valuable information for the employers who were the tool's potential customers.

However, the law was changed to put the burden on the employer, not the vendor. I'm not sure why. Perhaps it wasn't clear that the city had jurisdiction over vendors, some of which operate in many states. I don't know.

In any case, putting the burden on employers alone was a mistake. The law went into effect six months ago, and a study from the public interest group Data & Society suggests that while many companies are contracting for the bias audits, they are largely not making the results available to the public.

Apparently, the publicity requirement applies only if the tool is actually used to make employment decisions. Those familiar with the adverse action requirement for the use of credit reports in employment decisions under the Fair Credit Reporting Act will recognize the problem: The employer is the only one who can tell whether the decision-making tool was actually used to make decisions.

Employers in New York City are making routine internal judgments that they are exempt from the audit and disclosure requirement. They then wait for regulators or aggrieved applicants or employees to bring an enforcement case. New York City enforces the law on a complaint basis and has taken no enforcement actions yet. As a result, there’s a significant chance that an algorithmic bias law that was heralded as the model for other states and localities will become a dead letter — on the books but ignored.  

There are many possible ways forward, some of which were described by Jacob Metcalf, the author of the Data & Society study noted above, in a commentary for The Hill in December. But two things jump out at me. The first is to put the burden on vendors to conduct bias audits and publicly disclose any disparate impacts of their employment tools. The second is to do this on a national basis, so that vendors cannot evade local laws by refusing to sell in particular jurisdictions and so that any questions about the legal authority to regulate interstate commerce fall away.

The big advantage of requiring vendor audits is that they put information in the hands of employers, who can then choose the employment tool that best matches their needs while taking into account the legal risk of violating employment discrimination rules.

If one tool typically recommends hiring 10 white candidates for every 100 who apply, while recommending only two Black candidates for every 100 who apply, that is useful information for the employing firm. If it notices that another tool recommends eight Black candidates for every 100 who apply, the firm can see that switching tools would put it in better shape to comply with the Equal Employment Opportunity Commission's 80 percent rule of thumb.
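To make the arithmetic concrete, here is a minimal sketch of the four-fifths comparison using the hypothetical selection counts above; the function names and the simple ratio calculation are illustrative, not a prescribed compliance method.

```python
# Illustrative only: the EEOC's 80 percent (four-fifths) rule of thumb compares
# each group's selection rate to the highest group's selection rate.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants the tool recommends for hire."""
    return selected / applicants

def impact_ratio(group_rate: float, reference_rate: float) -> float:
    """A group's selection rate divided by the reference (highest) group's rate.
    A ratio below 0.8 is conventionally treated as evidence of adverse impact."""
    return group_rate / reference_rate

white_rate = selection_rate(10, 100)        # 0.10
black_rate_tool_a = selection_rate(2, 100)  # 0.02
black_rate_tool_b = selection_rate(8, 100)  # 0.08

print(impact_ratio(black_rate_tool_a, white_rate))  # 0.2 -- far below the 0.8 threshold
print(impact_ratio(black_rate_tool_b, white_rate))  # 0.8 -- at the threshold
```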

A requirement to conduct and disclose bias audits does not and should not set any standard for illegal disparate impact. That is the purview of the underlying discrimination law. If the employing firm thinks it has a good business reason for using the tool that recommends only two out of every 100 Black applicants, it is free to use that tool. But at least the firm knows the legal risk it is running when it does so.

This disclosure requirement puts market pressure on vendors to produce employment tools that avoid disparate impacts to the greatest extent possible, rather than putting a burden on every employer to audit and publicize its own use of automated employment tools. Employers will understandably seek every legal means to avoid being shamed in that way. It would be much more effective to harness market incentives to push vendors to produce fair employment tools.

Congress can take a lesson for AI law and regulation from this example. Agencies with the current responsibility to make sure that users of AI comply with the law, including the EEOC for employment and financial regulators for credit scores, have limited authority to impose disclosure or testing requirements on AI vendors. 

Last year Alex Engler, my former colleague at Brookings, now at the White House, urged Congress to pass a comprehensive bill to upgrade the authority of existing agencies to deal with AI issues within their jurisdiction. One element of this would be giving authority to these agencies to impose audit and disclosure rules on AI vendors.  

As it looks for opportunities to regulate the fast-moving field of artificial intelligence, Congress should grab some low-hanging fruit and pass legislation mandating bias audits and disclosures for AI vendors.

Mark MacCarthy is the author of “Regulating Digital Industries” (Brookings, 2023), an adjunct professor at Georgetown University’s Communication, Culture & Technology Program, a nonresident senior fellow at the Institute for Technology Law and Policy at Georgetown Law and a nonresident senior fellow at the Brookings Institution. 

