How the EU's new AI Act will affect businesses [Q&A]
The European Union first proposed a regulatory framework for AI back in 2021. The wheels of politics inevitably grind slowly, however, and the bloc is still working on legislation to regulate the development and use of artificial intelligence.
The proposed Artificial Intelligence Act has sparked a good deal of debate in the industry, with many worried that it could harm business competitiveness.
We spoke to Steve Wilson, chief product officer at Contrast Security and project lead for the OWASP Top 10 for Large Language Model (LLM) AI Applications, to find out more about the proposed act and what it means for enterprises.
BN: The EU AI Act, set to be ratified next year, is a groundbreaking piece of legislation in the realm of AI. How do you foresee this act affecting the global AI landscape once it comes into effect?
SW: The EU AI Act represents a major stride toward standardized, comprehensive regulation in the world of AI, providing a framework that tries to balance innovation with user protection. Once it's implemented, I anticipate a ripple effect across the global AI landscape. For one, it will likely inspire similar legislative efforts in other countries -- much as GDPR did. More importantly, international companies dealing with AI will have to align with the AI Act's provisions to do business within the EU, creating a de facto global standard. The AI Act may also drive increased transparency and accountability in AI, as companies strive to meet its stringent standards.
BN: The EU AI Act aims to regulate the people and processes involved in AI development, rather than the AI models themselves. What are the potential challenges and benefits you anticipate with this approach, particularly for AI developers and companies?
SW: Focusing on the people and processes involved in AI development makes the AI Act adaptable and scalable, given the fast-paced evolution of AI models. But this approach comes with challenges. It necessitates a shift in practices for developers and companies, demanding increased documentation, review, and monitoring. It may also require significant investment in education and training to ensure professionals understand and comply with the AI Act.
However, the benefits outweigh the challenges. The AI Act can inspire the creation of robust frameworks for AI development, fostering a culture of transparency and accountability. Our work on the OWASP Top 10 for Large Language Models is one such framework -- in this case aimed at development practices for securing this new breed of AI applications. It may also drive a shift towards more ethical AI practices, which can bolster public trust in AI systems -- a critical factor for their wider acceptance and adoption.
BN: How are you preparing your solutions and customers for the potential implications of the EU AI Act? What specific aspects of the Act do you see as most impactful for businesses in the software safety space?
SW: At Contrast Security, we are acutely aware that our role extends beyond being a software security provider. As the AI landscape evolves, we recognize that we must lead the way in ensuring and proving compliance with regulatory requirements, such as those proposed in the EU AI Act. To this end, we are actively engaging in standards efforts, like the OWASP Top 10 for Large Language Models, which provides a crucial framework for assessing the security of AI systems. This initiative not only aids us in enhancing our own solutions but also equips us to better guide our customers through this increasingly complex regulatory environment. We're also planning to introduce our first new features aimed at secure use of Large Language Model AIs in the near future.
The EU AI Act introduces several elements that will be impactful for businesses in the software safety space. These include:
- Transparency obligations: The AI Act requires operators of AI systems to provide clear and adequate information to users about the AI system's functionality, including its capabilities and limitations. This might require businesses to make significant changes to their interfaces, user agreements, and communication methods.
- High-risk AI systems: The AI Act establishes stricter regulations for AI systems considered 'high risk'. Many software safety applications could fall under this designation, necessitating rigorous assessment and continuous monitoring.
- Record-keeping, documentation, and reporting: To demonstrate compliance, businesses will need to keep detailed records and produce thorough documentation related to the design and purpose of their AI systems. This will call for comprehensive data management strategies and possibly additional personnel or resources (see the sketch after this list).
- Quality management systems: Businesses will need to establish a quality management system and designate a person responsible for compliance. This could influence hiring and training within the organization.
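To make the record-keeping point concrete, here is a minimal sketch of what an audit trail around an AI inference call might look like in Python. It is purely illustrative: the `ModelAuditLog` class, its field names, and the JSONL storage format are assumptions for the example, not anything the Act itself prescribes.

```python
import hashlib
import json
from datetime import datetime, timezone

class ModelAuditLog:
    """Hypothetical append-only audit trail for an AI system's decisions."""

    def __init__(self, path: str, model_id: str, model_version: str):
        self.path = path
        self.model_id = model_id
        self.model_version = model_version

    def record(self, prompt: str, response: str, risk_tier: str) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_id": self.model_id,
            "model_version": self.model_version,
            "risk_tier": risk_tier,  # e.g. "high-risk" in the Act's terms
            # Hash inputs and outputs so the log can demonstrate integrity
            # without retaining personal data longer than necessary.
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
            "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")

# Usage: wrap every inference call so each decision leaves a record.
log = ModelAuditLog("audit.jsonl", model_id="resume-screener", model_version="2.3.1")
log.record(prompt="Evaluate candidate A", response="Shortlist", risk_tier="high-risk")
```

Whether hashing, full capture, or something in between is appropriate will depend on the final text of the Act and on how it interacts with data protection rules such as GDPR.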
Given these aspects of the EU AI Act, it's clear that businesses in the software safety space will need to proactively adapt their strategies, resources, and solutions to effectively navigate this new regulatory landscape.
BN: As Project Lead for the OWASP Top 10 for Large Language Models project, how does this proposed legislation influence your approach to identifying and addressing vulnerabilities in LLM applications?
SW: Our initiative was already underway before the EU draft regulations, but our mission aligns seamlessly with the AI Act's intent -- forming a technical, foundational reference for these broader regulatory endeavors.
The AI Act indeed underscores the necessity of our work, which is primarily aimed at developers and security experts involved in creating applications that leverage LLM technologies. We provide these groups with actionable, clear, and concise security guidance to ensure that AI is used responsibly and securely. This encompasses not just traditional developers but also the increasing population of citizen developers -- those who might be newer to the field but are handling critical data and services.
The AI Act's emphasis on developers' and AI operators' accountability resonates with our focus. It contributes valuable context to our mission, guiding us in identifying and addressing vulnerabilities in LLM applications. We aim to augment these legislative efforts by providing more granular, technical guidance based on our continuous examination of the LLM ecosystem and the real threats we see in terms of data security and privacy.
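To give a flavor of that granular guidance, here is a minimal sketch of one pattern the OWASP Top 10 for LLM Applications addresses under insecure output handling: treating model output as untrusted input and validating it before it reaches the operating system. The function name and allow-list are hypothetical, chosen only for illustration.

```python
import shlex
import subprocess

# Hypothetical allow-list: only these commands may ever run,
# regardless of what the model suggests.
ALLOWED_COMMANDS = {"ls", "df", "uptime"}

def run_model_suggested_command(model_output: str) -> str:
    """Validate an LLM-suggested shell command before executing it.

    The insecure pattern would be subprocess.run(model_output, shell=True),
    which hands untrusted model output straight to a shell. Instead, parse
    the output, check it against an allow-list, and run it without a shell.
    """
    parts = shlex.split(model_output)
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        raise ValueError(f"Disallowed command from model: {model_output!r}")
    result = subprocess.run(parts, capture_output=True, text=True, timeout=10)
    return result.stdout
```

The same principle applies wherever model output crosses a trust boundary, whether that is SQL, HTML, file paths, or downstream API parameters.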
BN: How do you think the EU AI Act could influence other nations in terms of proposing similar legislation? What lessons can other countries take away from this European initiative?
SW: The EU AI Act, in its pursuit of creating a unified legal framework for AI, serves as a significant blueprint for other nations seeking to regulate AI. Its emphasis on transparency, accountability, and a risk-based regulatory approach could inspire similar principles in upcoming legislation worldwide.
For instance, the Office of Science and Technology Policy in the United States is developing a National Artificial Intelligence Strategy. The thoroughness of the EU AI Act, particularly in its categorization of AI systems based on risk levels, might provide a robust reference point for US policymakers. The AI Act's emphasis on fundamental rights and transparency could encourage the US to adopt similar perspectives as it continues to shape its national AI strategy.
Furthermore, New York City's AI law, which focuses on bias audits for Automated Employment Decision Tools (AEDTs), indicates a burgeoning interest in AI accountability. The EU AI Act, with its focus on high-risk AI systems and requirements for impact assessments, might offer valuable lessons to enhance these efforts and extend similar requirements to other sectors beyond recruitment.
The AI Act's emphasis on not just the technology itself, but also the processes and people involved in AI development, underscores the importance of a holistic view. This perspective could guide other nations in ensuring their AI regulations cover not just the output but also the integrity of the AI development process, thereby fostering a culture of responsible AI development.
Another significant takeaway is the EU's proactive stance in engaging with the broader AI community, inviting public comments, and striving for regulatory harmonization across its member states. This inclusive and collaborative approach could serve as a model for other nations seeking to enact their own AI regulations.
In conclusion, the EU AI Act, with its comprehensive and balanced approach to AI regulation, sets a significant precedent. Other nations can derive valuable lessons from this European initiative, using it as a reference point in their own legislative efforts, thereby fostering a global convergence towards responsible and secure AI practices.
Image credit: paparazzza / Shutterstock