The California Privacy Protection Agency has initiated rulemaking to regulate the use of AI in employment practices, highlighting a trend towards standardised AI governance.

On November 8, 2024, the California Privacy Protection Agency (CPPA) took a significant step in regulating the use of automated decision-making technology (ADMT) in employment practices by voting 4-1 in favour of initiating formal rulemaking. This decision comes as businesses increasingly adopt artificial intelligence (AI) tools for critical functions such as hiring, employee evaluation, and performance management. The proposed regulations are designed to address the complexities and potential risks associated with the employment applications of such technologies.

The draft regulations define ADMT as any technology that processes personal information and utilises computation to execute decisions or significantly aid human decision-making processes. If these regulations come into effect, they would impose extensive requirements on employers who depend on AI systems for making workforce-related decisions. This regulatory framework aligns with existing laws in places such as New York and anticipated legislation in Colorado, showcasing a growing trend towards the standardisation of AI governance in employment.

A key component of the draft regulations is a Pre-Use Notice requirement. Employers using ADMT in significant employment decisions must inform all relevant parties, including employees, independent contractors, and job applicants, of their reliance on such technologies before any personal information is processed. The notice must also explain the right to opt out of ADMT and the right to access information about decisions the technology is used to make.

Furthermore, a crucial aspect of these regulations is the imposition of a Bias Review. Employers must perform an audit to assess whether their use of such technologies, particularly physical or biological identification methods, discriminates against protected classes. The draft regulations do not specify whether each employer must conduct its own audit; by contrast, jurisdictions such as New York City permit audits to draw on external data under certain conditions.

The regulations also stipulate an Opt-Out provision, granting individuals the right to decline the use of ADMT in certain employment decisions. However, this right may not extend to decisions about hiring, work allocation, or compensation if the employer can demonstrate that its AI systems adhere to established accuracy and non-discrimination protocols.

An annual Cybersecurity Audit is another requirement outlined in the proposed regulations. Employers must ensure their data protection systems are updated and capable of addressing vulnerabilities, utilising an independent auditor to verify their cybersecurity measures.

In addition to these mandates, a comprehensive Risk Assessment is required before processing personal information relevant to significant employment decisions. This assessment must weigh whether the potential privacy risks outweigh the benefits of the processing, and must be submitted to the CPPA. Employers are required to detail the purpose, methods, associated risks, and safeguards related to the processing of personal information; vague or generic terms are not permissible as justification.

Now that the CPPA has published its notice of proposed rulemaking, a public comment period will follow. Because the standard 45-day period coincides with the holiday season, an extension has been requested, and the submission deadline is anticipated in early 2025, although no precise date has been confirmed. Observers of the regulatory landscape will continue to monitor developments as businesses prepare to navigate these emerging requirements.

Source: Noah Wire Services
