Earlier this month, Federal Trade Commission (FTC) Chair Lina Khan published an op-ed in The New York Times reaffirming the agency's commitment to regulating AI. But there was one AI application Khan didn't mention that the FTC urgently needs to regulate: automated hiring systems. These range in sophistication from tools that simply analyze and rank resumes to systems that green-light candidates and reject applicants deemed unsuitable. Increasingly, American workers must use them if they want to get a job.
In my recent book, The Quantified Worker, I argue that American workers are being reduced to numbers by AI technologies in the workplace, chiefly automated hiring systems. These systems reduce a candidate to a score or a rank, often ignoring the gestalt of their human experience. Sometimes they even sort people by race, age, and gender, characteristics that are illegal to consider in hiring decisions.
Paradoxically, many of these systems are marketed as impartial, or as a guaranteed way to reduce the likelihood of discriminatory hiring. But because they are so loosely regulated, such systems have been shown to deny equal employment opportunity on the basis of protected categories such as race, age, gender, and disability. In December 2022, for example, a truckers' union sued Meta, arguing that Facebook "selectively shows job ads based on the gender and age of users, with older workers seeing ads much less often and women seeing job ads much less often, especially in industries that have historically excluded women." This is deceptive. It is also unfair to both job seekers and employers. Employers buy automated hiring systems in part to reduce their liability for employment discrimination, and the providers of these systems should be required by law to substantiate their claims of efficacy and fairness.
The law places automated hiring systems under the purview of the FTC, but the agency has yet to issue specific guidance on how vendors of these systems may advertise their products. It could start by requiring audits to ensure that automated hiring platforms deliver on the promises they make to employers. Providers of these platforms should be required to produce clear audit records demonstrating that their systems reduce bias in hiring decisions as advertised. These audits should also show that developers followed the recommendations of the Equal Employment Opportunity Commission (EEOC) when building their platforms.
In addition, in collaboration with the EEOC, the FTC could establish a Fair Automated Hiring Mark to certify automated hiring systems that have passed a rigorous audit process. Like a seal of approval, the mark would be a useful signal of quality for consumers, both applicants and employers.
The FTC should also allow job seekers, who are consumers of AI-enabled online application systems, to sue under the Fair Credit Reporting Act (FCRA). The FCRA was long thought to apply only to the Big Three credit reporting agencies, but a careful reading reveals that the law could apply whenever a report is created for any "economic decision." Under this definition, the candidate profiles generated by automated online hiring platforms are "consumer reports," meaning that the organizations that create them (such as online hiring platforms) would count as consumer reporting agencies. Under the FCRA, anyone who is the subject of such a report can contact the agency that produced it to see the results and request a correction or amendment. Most consumers are unaware that they have these rights. The FTC should launch an education campaign to inform applicants of these rights so they can exercise them.