6 Key Steps Businesses Should Take to Prepare for AI Regulation
Insights from the EU Digital Services Act and the Biden-Harris Administration's AI Policy
As AI continues to transform industries and impact our daily lives, ensuring its ethical and responsible development has become a priority for regulators worldwide. The European Union has adopted the Digital Services Act (DSA), which creates a comprehensive legal framework for digital services, including obligations around transparency, accountability, and user protection[1]. In the United States, the Biden-Harris Administration recently announced new actions to promote responsible AI innovation that protects Americans' rights and safety[2]. Proactive compliance with AI regulations is crucial for businesses, and early preparation can help companies navigate this evolving landscape.
Common trends in AI regulation across the EU and the US include an emphasis on transparency, fairness, accountability, and data protection. Both regions are focused on creating legal frameworks that protect users' rights while fostering innovation. In the EU, the DSA entered into force in late 2022, with its obligations applying in full by early 2024[3]. In the US, the Biden-Harris Administration's AI policy is still taking shape, with more specific regulations expected to follow in the coming months and years.
This article outlines six key steps businesses can take to be ready for the upcoming AI regulations outlined by the EU Digital Services Act and the Biden-Harris Administration's AI policy.
Develop an AI ethics policy and establish clear lines of accountability
Crafting a robust AI ethics policy that aligns with the principles of the DSA and the Biden-Harris Administration's AI policy is essential for businesses. The policy should cover aspects such as transparency, fairness, and accountability. Assigning responsibility for AI systems to specific individuals or teams is equally important, so that someone is clearly accountable for ethical and regulatory compliance. For example, Google has published its AI Principles[4], outlining its commitment to ethical AI development, and Microsoft has created the Aether (AI, Ethics, and Effects in Engineering and Research) Committee[5] to oversee AI ethics and policy. By examining these examples, businesses can gain insights into how to develop their own AI ethics policies and establish clear lines of accountability.
Prioritize data protection and privacy
Data protection and privacy are key components of both the DSA and the Biden-Harris Administration's AI policy. Companies should conduct data protection impact assessments to identify potential risks and implement appropriate measures to address them. Appointing a data protection officer can also help ensure compliance with data protection regulations. Several companies have already implemented robust data protection and privacy measures. For example, Apple has built privacy protections into its AI-driven products, such as Siri, and continues to prioritize user privacy[6]. Similarly, IBM has established the IBM Privacy Principles[7], outlining its commitment to data privacy and protection. Studying these examples can provide valuable insights for businesses looking to prioritize data protection and privacy in their AI systems.
Implement transparent and explainable AI systems
Transparency and explainability are essential aspects of responsible AI development, as outlined in both the DSA and the Biden-Harris Administration's AI policy. Companies should work towards developing AI systems whose decisions can be understood by users and regulators alike. OpenAI, for instance, is committed to providing public goods that help society navigate the path to AGI, including publishing most of its AI research[9]. IBM has also developed the AI Explainability 360 toolkit[10], which includes algorithms and resources to help users understand how AI models make decisions. By examining these examples, businesses can understand the importance of transparent and explainable AI systems and learn how to implement them in their operations.
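To make the idea concrete, here is a minimal sketch of the simplest form of explainability: a linear scoring model whose output can be decomposed into per-feature contributions that are reported alongside the decision. This is an illustration only, not the AI Explainability 360 API; the feature names and weights are invented for the example.

```python
# Hypothetical linear "credit scoring" model. With a linear model, each
# feature's signed contribution to the score can be reported directly,
# which is the most basic form of an explainable decision.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}  # assumed values

def explain(applicant):
    """Return the model's score plus each feature's signed contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return score, contributions

score, why = explain({"income": 4.0, "debt": 2.0, "years_employed": 3.0})
# 'why' shows the breakdown: income pushed the score up, debt pulled it down.
print(round(score, 2), {f: round(c, 2) for f, c in why.items()})
```

Real-world models are rarely this simple, which is exactly why dedicated toolkits exist; but the goal is the same: a decision accompanied by a human-readable account of what drove it.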
Ensure AI systems are fair and unbiased
Mitigating biases in AI systems is an important aspect of both the DSA and the Biden-Harris Administration's AI policy. Companies should strive to create AI systems that make fair decisions, free from discrimination based on factors such as race, gender, or age. Google's AI research division, Google Brain, has been working on techniques to reduce bias in AI systems[11]. Similarly, IBM's AI Fairness 360 toolkit[12] provides metrics and algorithms to help detect and mitigate biases in AI models. By studying these examples, businesses can gain insights into the importance of creating fair and unbiased AI systems and learn how to integrate such practices into their own AI development.
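As an illustration of what such fairness toolkits measure, the sketch below computes disparate impact, a widely used group-fairness metric: the selection rate of the unprivileged group divided by that of the privileged group, with the "four-fifths rule" commonly flagging ratios below 0.8. This is plain Python, not the AI Fairness 360 API, and the hiring data is invented for the example.

```python
# Disparate impact = P(positive outcome | unprivileged group)
#                  / P(positive outcome | privileged group).
# A value near 1.0 indicates parity; under 0.8 is a common warning threshold.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """outcomes: list of 0/1 decisions; groups: parallel list of group labels."""
    def positive_rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return positive_rate(unprivileged) / positive_rate(privileged)

# Toy data: 10 applicants; group "A" is privileged, group "B" unprivileged.
outcomes = [1, 1, 1, 0, 1,  1, 0, 0, 0, 1]
groups = ["A"] * 5 + ["B"] * 5
ratio = disparate_impact(outcomes, groups, unprivileged="B", privileged="A")
print(round(ratio, 2))  # 0.4 vs 0.8 selection rate -> ratio 0.5, below 0.8
```

A single metric never proves a system fair, but monitoring figures like this over time gives businesses an auditable signal to act on, which is what the dedicated toolkits provide at scale.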
Develop a risk management framework for AI
Creating a comprehensive risk management framework is crucial for businesses to anticipate and address the potential risks associated with AI technologies, as suggested by both the DSA and the Biden-Harris Administration's AI policy. Companies like Accenture have developed a framework for managing AI risks called the "Responsible AI Framework"[13], which can serve as an example for businesses seeking to create their own risk management strategies. Additionally, Deloitte has published an AI risk management guide[14] to help organizations understand and mitigate potential risks. By reviewing these resources, businesses can gain insights into how to develop and implement a robust risk management framework tailored to their specific AI applications.
Train employees on AI ethics and compliance
Training employees in AI ethics and compliance is essential for ensuring that businesses adhere to the principles laid out in the DSA and the Biden-Harris Administration's AI policy. Companies should provide regular training sessions and workshops to help employees understand the ethical implications of AI systems and how to apply best practices. For example, Google offers an internal AI ethics training program for its employees[15], and Microsoft has created a comprehensive AI Business School[16] to teach AI ethics and best practices. By exploring these initiatives, businesses can better understand how to implement AI ethics and compliance training programs within their organizations.
Conclusion
Businesses must proactively prepare for AI regulation to ensure compliance with the EU Digital Services Act, the Biden-Harris Administration's AI policy, and future AI regulations. By taking these six key steps, companies can position themselves for success in the rapidly evolving AI landscape, fostering innovation while adhering to ethical and regulatory standards.
Disclaimer: The content presented in this article is for informational purposes only and should not be construed as legal advice. The views and opinions expressed herein are solely those of the author and do not necessarily represent the views or opinions of any other person or organization. This article is not intended to provide, and should not be relied upon for, legal advice in any particular circumstance or jurisdiction. Consult with an attorney or other professional advisor for guidance specific to your situation.
References:
[1] European Commission. (2020). Proposal for a Regulation of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) and amending Directive 2000/31/EC. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52020PC0825
[2] The White House. (2023). Fact Sheet: Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation That Protects Americans' Rights and Safety. Retrieved from https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/
[3] Scott, M. (2022, January 19). EU's digital rule book faces delays until 2024. POLITICO. Retrieved from https://www.politico.eu/article/eu-digital-rule-book-faces-delays-until-2024/
[4] Google. (2018). AI at Google: Our Principles. Retrieved from https://www.blog.google/technology/ai/ai-principles/
[5] Microsoft. (n.d.). Responsible AI. Retrieved from https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6
[6] Apple. (n.d.). Privacy. Retrieved from https://www.apple.com/privacy/
[7] IBM. (n.d.). IBM Privacy Principles. Retrieved from https://www.ibm.com/privacy/us/en/privacyprinciples.html
[9] OpenAI. (n.d.). OpenAI Charter. Retrieved from https://openai.com/charter/
[10] IBM Research. (n.d.). AI Explainability 360. Retrieved from https://aix360.mybluemix.net/
[11] Google AI. (n.d.). Responsible AI Practices. Retrieved from https://ai.google/responsibilities/responsible-ai-practices/
[12] IBM Research. (n.d.). AI Fairness 360. Retrieved from http://aif360.mybluemix.net/
[13] Accenture. (2021). Responsible AI Framework. Retrieved from https://www.accenture.com/_acnmedia/PDF-121/Accenture-Responsible-AI-Framework-POV.pdf
[14] Deloitte. (n.d.). Managing AI Risk: A Guide for Business. Retrieved from https://www2.deloitte.com/content/dam/Deloitte/us/Documents/risk/us-aers-managing-ai-and-ml-model-risk.pdf
[15] Metz, C. (2018, August 16). Google Employees Protest Secret Work on Censored Search Engine for China. The New York Times. Retrieved from https://www.nytimes.com/2018/08/16/technology/google-employees-protest-search-censored-china.html
[16] Microsoft. (n.d.). AI Business School. Retrieved from https://www.microsoft.com/en-us/ai/ai-business-school