Keeping Up with the Evolution of AI for HR Governance: A Detailed Q&A Recap


Meghana Machiraju


In the rapidly evolving landscape of AI in HR, understanding and governing its use is more crucial than ever. HiredScore recently hosted a pivotal webinar featuring Keith Sonderling, Commissioner of the EEOC, Chandler Morse, VP of Corporate Affairs at Workday, and Athena Karp, Founder and CEO of HiredScore. They shed light on the changing terrain of AI governance in HR, discussing regulatory updates and future trends.

Here are the key questions that were discussed during the webinar: 

1. How would you describe the current state of the regulatory environment in terms of technology and AI, both locally and internationally?

“The regulatory environment is complex and evolving. We conducted a comprehensive survey with about 1,300 business leaders and 4,000 employees worldwide, which revealed a significant trust gap in technology. This gap is especially evident in the regulatory landscape of AI. There's a contrast between corporate leaders' excitement and willingness to invest in technology and employees' concerns about its implementation and regulation. Notably, three out of four employees felt their companies were not engaging adequately in AI regulation. This highlights where we are in the maturity cycle of understanding and addressing AI regulation.” - Chandler Morse

2. What is driving the confusion and chaos around AI regulations, governance, and laws?

“There's a significant amount of confusion and chaos in the realm of AI regulations and governance. In my role in the executive branch, we deal with long-standing laws, such as those the EEOC has enforced since the 1960s. It's essential to remember the fundamental purpose of these AI tools in HR and employment decisions. These tools range from recruitment to the entire employee lifecycle. Each organization needs to define the specific use of AI in its context and then consider the long-standing laws and guidelines that apply to those employment decisions. Regardless of the AI application, whether in hiring, job descriptions, or compensation, the key is to understand that these tools are making decisions that are already regulated under laws like Title VII, which cover discrimination based on sex, race, national origin, religion, age, and disability. The challenge is to see through the noise and focus on the core employment decision being made by these tools.” - Keith Sonderling

3. What actions can be taken on the front end to prevent bias in employment decisions using technology?

To prevent bias in employment decisions using technology, it's essential to focus on transparency, traceability, and consent in the technology's application. Organizations should ensure that AI governance is integrated from the start, creating a comprehensive approach to using technology lawfully and ethically. 
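To make transparency, traceability, and consent a bit more concrete, here is a minimal Python sketch of how an organization might record notice, candidate consent, and a traceable decision trail for an AI-assisted screening step. The record fields, function names, and fallback behavior are illustrative assumptions, not HiredScore's or any vendor's actual implementation.

```python
# Hypothetical sketch: recording notice, consent, and a traceable decision
# record for an AI-assisted screening step. All names are illustrative.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import List

@dataclass
class DecisionRecord:
    candidate_id: str
    notice_shown: bool        # candidate was told AI assists this step
    consent_given: bool       # candidate agreed to AI-assisted review
    model_version: str        # which model produced the recommendation
    inputs_summary: str       # what data the model actually considered
    recommendation: str       # e.g. "advance" or "human review"
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: List[dict] = []

def review_candidate(candidate_id: str, notice_shown: bool, consent_given: bool) -> DecisionRecord:
    """Only produce an AI recommendation when notice and consent are on record."""
    if not (notice_shown and consent_given):
        # Without notice and consent, route the candidate to a person instead.
        record = DecisionRecord(candidate_id, notice_shown, consent_given,
                                model_version="n/a", inputs_summary="n/a",
                                recommendation="human review")
    else:
        record = DecisionRecord(candidate_id, notice_shown, consent_given,
                                model_version="screening-model-v2",  # placeholder tag
                                inputs_summary="resume fields tied to posted requirements",
                                recommendation="advance")
    audit_log.append(asdict(record))  # traceable trail for later audits
    return record

print(review_candidate("cand-001", notice_shown=True, consent_given=True))
```

The point of the sketch is that the governance checks (was notice shown, was consent captured, which model and inputs were used) live alongside the recommendation itself, so they can be reviewed later rather than reconstructed after the fact.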

4. How can employment decision-making be improved with technology?

Technology can make employment decision-making more transparent and auditable. This not only helps organizations use technology lawfully and ethically but also supports the mission of preventing discrimination. By ensuring that pre-existing laws are applied to technology-driven decisions in the same way they apply to human decisions, organizations can create a more accountable and fair employment process.

5. How does the legal framework apply to technology in making employment decisions?

The legal framework that applies to human decisions also applies to technology-driven decisions. This means that if it's illegal to discriminate with traditional methods, it's equally illegal to do so with technology. However, there's a gap in ensuring trust and accountability in how these technologies are implemented and governed. Bridging this gap is essential for maintaining lawful and ethical standards in employment decisions.

6. What are the challenges for HR in governing AI use?

HR departments are uniquely positioned to take a leading role in AI governance within organizations. They already deal with critical aspects of workplace management such as civil rights, employment practices, and internal policies, which are directly impacted by AI implementation. The challenge for HR is to adapt existing policies and practices to encompass AI-related issues. This includes raising awareness among employees and managers about how these policies apply to AI and ensuring that AI applications in HR respect and uphold the organization's values and legal obligations.

7. In light of the challenges posed by AI in HR and other areas, what are some practical steps organizations can take to ensure their AI governance is effective and aligns with their broader ethical commitments?

  • Start by setting a clear ethical tone at the top, with strong statements from leadership about the company's stance on critical social issues.
  • Invest in educating the workforce about AI and its implications, adapting existing policies to include AI-specific considerations, and ensuring transparency in AI applications. 
  • Draw on external resources, such as AI ethics guidelines from major tech companies, to inform your governance strategies.

8. What is happening in California regarding AB 331?

California is leading the way with AB 331, a bill that introduces a risk-based approach to AI governance in the workplace, delineating responsibilities for both developers and employers. It emphasizes impact assessments, a tool borrowed from the privacy context. This legislation is setting a trend, as similar bills have been introduced in Washington, Oklahoma, and New York.

9. What can be expected in the future regarding AI legislation?

“In the next 12 to 24 months, significant developments are expected in AI legislation. While there might not end up being 50 different laws across the states, plus separate major-city and federal requirements, a few model laws and frameworks will likely emerge. These will provide clarity and a standardized approach to AI governance in the workplace. Stakeholders should pay close attention during this period to stay ahead of the regulatory curve.” - Athena Karp

10. What common themes are emerging in state laws regarding employment and AI?

Keith mentioned that a common theme in state proposals, such as those in California and New York, is the requirement for companies to conduct audits covering certain protected classes, similar to developments in the EU around AI in employment. These audits may come with requirements for disclosure, applicant consent, pre-deployment testing, and potentially yearly testing.
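One common way to ground terms like "pre-deployment testing" and "yearly testing" is the four-fifths rule used in U.S. adverse-impact analysis: compare each group's selection rate to the highest group's rate and flag ratios below 0.8 for review. The sketch below uses invented counts purely for illustration and is not a substitute for the audits these proposals would require.

```python
# Minimal sketch of a four-fifths-rule check on selection rates.
# The applicant counts below are invented for illustration only.
selections = {
    # group: (selected, total applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}

rates = {group: sel / total for group, (sel, total) in selections.items()}
benchmark = max(rates.values())  # highest selection rate across groups

for group, rate in rates.items():
    impact_ratio = rate / benchmark
    flag = "review" if impact_ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: rate={rate:.2f}, impact ratio={impact_ratio:.2f} -> {flag}")
```

An actual audit would rely on real applicant-flow data, appropriate statistical tests, and legal review; the sketch only shows the arithmetic behind the commonly cited threshold.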

11. How does the variance in federal, state, and local guidelines affect companies?

The variation leads to complexity and increased operational costs for companies hiring in multiple states. However, by designing a flexible and adaptable audit framework, companies can address these varying requirements effectively.
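One way to keep such an audit framework flexible is to express each jurisdiction's obligations as data rather than hard-coding them, so a new law slots in without rewriting the process. The jurisdictions and obligations below are assumptions for illustration, not a statement of what any law currently requires.

```python
# Hypothetical sketch: jurisdiction requirements expressed as data so new
# rules can be added without changing the audit workflow itself.
REQUIREMENTS = {
    # jurisdiction: assumed obligations, for illustration only
    "NYC": {"disclosure", "annual_bias_audit"},
    "CA":  {"disclosure", "impact_assessment"},
}

def obligations_for(hiring_locations):
    """Union of obligations across every place a role is posted."""
    combined = set()
    for location in hiring_locations:
        combined |= REQUIREMENTS.get(location, set())
    return sorted(combined)

print(obligations_for(["NYC", "CA"]))
# -> ['annual_bias_audit', 'disclosure', 'impact_assessment']
```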

12. What role does voluntary action play in responsible AI development?

Chandler emphasized that voluntary action is crucial in responsible AI development, with companies like Workday leading the conversation. These actions include getting leadership buy-in, forming cross-functional teams, creating guidelines, and building risk assessments. This proactive approach is gaining traction even before regulations are fully formed.

HiredScore's framework for AI governance in HR includes three pillars: Notice, Consent, and Audit. This framework is our commitment to ethical, transparent, and effective AI use, guiding HR towards a future where innovation aligns with responsibility and inclusivity. To learn more about HiredScore’s framework and the responsible and trustworthy use of AI in HR, watch the full webinar embedded above.
