The XR5.0 project on ‘Human-Centric AI-Enabled Extended Reality Applications for the Industry 5.0 Era’ aims to develop, demonstrate, and validate a new person-centric, AI-based extended reality (XR) framework designed specifically for Industry 5.0 applications. It focuses on establishing foundational principles and blueprints for integrating XR technologies in Industry 5.0 settings. A significant aspect of the project is the creation of innovative ‘XR-made-in-Europe’ technology, designed to integrate seamlessly with human-centric manufacturing approaches and align with European values.
Additionally, the XR5.0 project emphasizes the integration of human-centered digital twins with advanced XR and artificial intelligence (AI) technologies, including explainable AI (XAI), generative AI (GenAI), active learning (AL), and neurosymbolic learning. This integration supports a cloud-based training platform that provides ergonomic, personalized training for industrial workers, aligning with Industry 5.0 standards.
CORCOM plays a crucial role in Task 1.4, which aims to ensure that the project adheres to relevant ethical and legal standards. This task involves developing a detailed framework that emphasizes stringent legal protocols for personal data processing in compliance with the EU General Data Protection Regulation (GDPR) and provisions from legislation on cybersecurity, such as the Cybersecurity Act and the Cyber Resilience Act.
Additionally, this framework will extend beyond mere legal compliance to thoroughly address the broader ethical considerations associated with AI. It will guide the development and use of AI technologies in ways that uphold human values and societal norms. Furthermore, the framework will include a detailed analysis of how the EU AI Act, approved in March 2024, influences the project’s design architecture and the integration of its AI systems. This approach ensures that the project remains compliant with current regulations while also addressing ethical concerns, thus laying a solid foundation for responsible AI development.
Recognized as the world’s first comprehensive regulatory framework for AI, the AI Act advocates a “human-centric” approach. This approach ensures the safety of AI systems within the European market, upholds fundamental rights and values, and encourages investment and innovation in AI throughout Europe. The AI Act classifies AI systems into four risk categories—unacceptable risk, high risk, limited risk, and minimal or no risk—and sets forth specific rules and requirements for each. Using this risk-based approach, the AI Act determines each AI system’s compliance obligations and prohibits particularly hazardous AI systems outright to prevent potential harms.
Key mandates for high-risk AI systems under the AI Act include requirements for data quality, record keeping, transparency, human oversight, accuracy, cybersecurity, quality management, and conformity assessment. Both AI providers and deployers are obliged to comply with regulations tailored to their specific roles and must carry out audits to ensure proper AI usage. These measures will enable proactive compliance with AI regulations, enhance risk management effectiveness, and ensure sustained adherence to the AI Act.
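The tiered structure described above lends itself to a simple programmatic representation, which project teams could use as a starting point for internal compliance tracking. The sketch below is purely illustrative: the tier names follow the AI Act, but the obligation lists are a non-exhaustive summary of the mandates mentioned here, and the function and variable names are our own assumptions, not an authoritative mapping from the regulation.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk categories defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # subject to strict obligations
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal"            # no specific obligations


# Illustrative (non-exhaustive) mapping of risk tiers to obligations;
# the high-risk entries mirror the key mandates summarized above.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited - may not be placed on the EU market"],
    RiskTier.HIGH: [
        "data quality",
        "record keeping",
        "transparency",
        "human oversight",
        "accuracy",
        "cybersecurity",
        "quality management",
        "conformity assessment",
    ],
    RiskTier.LIMITED: ["transparency notices to affected users"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]
```

For example, `obligations_for(RiskTier.HIGH)` returns the list of high-risk mandates, while `obligations_for(RiskTier.MINIMAL)` returns an empty list, reflecting the absence of specific obligations at that tier.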
Furthermore, to ensure effective governance and risk management for XR and AI within the XR5.0 project, it is imperative to keep pace with the rapidly evolving legal framework concerning AI liability. The EU has introduced two key proposals aimed at streamlining the process of establishing liability and securing compensation for AI-related damages: the amendment to the Product Liability Directive and the AI Liability Directive. These legislative proposals are significant steps toward mitigating the legal risks associated with AI systems and underscore the importance of compliance with new regulations.
For project partners developing AI-integrated XR products, it is essential to implement rigorous supervision, testing, and monitoring procedures to assess and mitigate potential impacts effectively. This proactive approach not only aligns with legal requirements but also enhances the safety and reliability of AI applications in XR technologies.
In upcoming articles, we will outline the key guidelines, standards, and risk management frameworks essential for developing AI risk models tailored to specific projects like XR5.0. It is crucial for all project partners and stakeholders to familiarize themselves with these resources so they can integrate the applicable elements and manage AI-usage risks comprehensively. Providers and deployers of AI systems must adhere to these frameworks while recognizing that there is no universal solution: effective risk management means combining and customizing various standards and frameworks to fit the unique requirements of each XR and AI system.
Written by Marcelo Corrales Compagnucci