Workplan and achievements 2023
The Counterfactual Assessment and Valuation for Awareness Architecture (CAVAA) project centers on the idea that awareness is vital for survival in a world shaped by hidden states, the internal dynamics of other agents, and social and moral norms. Awareness combines perceptual data, memory, and inferred elements to form an internal virtual world. CAVAA aims to develop an integrated computational architecture to understand and engineer awareness in both biological and technological systems. The project focuses on perception, memory, virtualization, simulation, and integration, and applies these in robots and artificial agents across use cases such as robot foraging, social robotics, and computer game benchmarks. These scenarios will test trade-offs such as efficiency versus robustness and gauge user acceptance.
The first year of the CAVAA project combined technical development and integration with deep theoretical discussion and conceptual refinement. A particularly significant achievement was the convergence of CAVAA partners at the DCBT 2023 Summer School at SRU, which resulted in an intensive technical integration of the core computational models (WP1, WP2) and set the stage for subsequent developments and benchmarks (WP4). These collaborative sessions yielded a series of technical advances, with special emphasis on the development, integration, and testing of the first modules of the CAVAA cognitive architecture. Core developments include the implementation of a reactive layer that emphasizes self-regulation, the enhancement of spatially tuned features for efficient trajectory planning, and significant strides in episodic reinforcement learning, showing improved learning speed and memory storage. Furthermore, the first technical integration of computational models, such as Sequential Episodic Control with Model-Based RL, represents the first major step towards building CAVAA's cognitive architecture. Work in WP3 advanced the exploration of robotic simulations, interface designs, and integration strategies. Key achievements include a clear definition of the types of information to be exchanged and the level of abstraction necessary for these agents to operate effectively, facilitating the exchange of perception and motor-control information between the robotic simulator and CAVAA's computational models. Concurrently, a series of online scientific seminars and meetings held during the year culminated in the first draft of a joint theoretical research paper, steered by UU, which honed the project's theoretical underpinnings by clarifying the definitions of consciousness and awareness and their empirical validation.
This manuscript serves as a cornerstone, linking theoretical postulations to real-world benchmarks and thus paving the way for the project's subsequent phases. Collectively, these advancements not only reinforce the project's objectives but also exemplify the blend of theory, practice, and innovation that CAVAA champions.
Key developments in the first year demonstrate progress on the CAVAA cognitive architecture, with a focus on redefining decision-making and action selection in AI systems that leverage awareness. Notable achievements include the Whitened Sparse Autoencoder, which converts perceptual states into discrete memory events, and the Sequential Episodic Control (SEC) model, which advances episodic reinforcement learning. SEC's sequential chaining of episodic memories enhances learning speed and memory capacity, setting a new benchmark in efficiency. Advances in Model-Based RL, incorporating techniques such as Prioritized Sweeping and Bidirectional Search, demonstrate adaptability and curiosity-driven learning in dynamic environments.
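Prioritized Sweeping, mentioned above, is a standard model-based planning technique that propagates value changes backwards through a learned model, starting from the states where the change is largest. The sketch below is a minimal tabular illustration under a deterministic-model assumption; it is not CAVAA's implementation, and all names are illustrative:

```python
import heapq
from collections import defaultdict

def prioritized_sweeping(model, gamma=0.9, theta=1e-6, max_updates=10_000):
    """Plan over a learned deterministic model with prioritized sweeping.

    model: dict mapping (state, action) -> (reward, next_state),
           with next_state=None for terminal transitions.
    Returns a Q-value table: dict mapping (state, action) -> value.
    """
    Q = {sa: 0.0 for sa in model}
    acts = defaultdict(list)    # actions available in each state
    preds = defaultdict(set)    # (state, action) pairs leading into each state
    for (s, a), (r, s2) in model.items():
        acts[s].append(a)
        if s2 is not None:
            preds[s2].add((s, a))

    def target(s, a):
        # One-step backup target through the learned model.
        r, s2 = model[(s, a)]
        if s2 is None or not acts[s2]:
            return r
        return r + gamma * max(Q[(s2, a2)] for a2 in acts[s2])

    # Max-heap of pending updates, keyed by the magnitude of the value change.
    pq, counter = [], 0
    for sa in model:
        p = abs(target(*sa) - Q[sa])
        if p > theta:
            heapq.heappush(pq, (-p, counter, sa)); counter += 1

    for _ in range(max_updates):
        if not pq:
            break
        _, _, (s, a) = heapq.heappop(pq)
        Q[(s, a)] = target(s, a)
        # Propagate the change backwards to all predecessors of state s.
        for sa_prev in preds[s]:
            p = abs(target(*sa_prev) - Q[sa_prev])
            if p > theta:
                heapq.heappush(pq, (-p, counter, sa_prev)); counter += 1
    return Q
```

On a three-state corridor with a single rewarded terminal transition, the sweep starts at the rewarded pair and works backwards, touching each pair only as often as its value actually changes; this backward propagation is what gives the method its sample efficiency in dynamic environments.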
The integration of SEC with Model-Based RL further advances CAVAA's cognitive architecture, exemplifying flexible decision-making in dynamic scenarios. On the theoretical side, the "Artificial Awareness" manuscript differentiates 'consciousness' from 'awareness' and introduces the notion of consciousness profiles. This approach contributes to ongoing discussions on artificial awareness, offering new perspectives and validation methods.
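To make the episodic/model-based integration concrete, the sketch below shows one simple arbitration scheme: reuse an action retrieved from stored episodes when a match exists, otherwise fall back on a model-based value table. This is purely illustrative and does not reproduce the project's actual SEC or Model-Based RL modules; every name here is hypothetical:

```python
class EpisodicBuffer:
    """Toy episodic store: keeps whole episodes as (return, sequence) pairs
    and, for a queried state, proposes the action that followed that state
    in the highest-return episode seen so far."""

    def __init__(self):
        self.episodes = []  # list of (episode_return, [(state, action), ...])

    def store(self, transitions, episode_return):
        self.episodes.append((episode_return, list(transitions)))

    def propose(self, state):
        best_action, best_return = None, float("-inf")
        for ret, seq in self.episodes:
            for s, a in seq:
                if s == state and ret > best_return:
                    best_action, best_return = a, ret
        return best_action


def select_action(state, buffer, q_values, actions):
    """Arbitration rule: reuse a stored episodic action when one exists,
    otherwise pick greedily from the model-based Q-value table."""
    action = buffer.propose(state)
    if action is not None:
        return action
    return max(actions, key=lambda a: q_values.get((state, a), 0.0))
```

The design point this sketch illustrates is the complementarity of the two systems: episodic retrieval is fast and sample-efficient for familiar situations, while the model-based table covers states no stored episode has visited.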
CAVAA's scientific outcomes provide tangible advancements that support and enhance several EU initiatives:
- Artificial Intelligence Act: CAVAA's emphasis on developing empathetic and trustworthy AI systems complements the AI Act's focus on harm reduction and responsible AI monitoring. By engineering algorithms able to assess the potential consequences of their future actions through the virtualization of possible futures, CAVAA offers a proactive approach to addressing the physical and mental harm concerns outlined in the Act.
- Digital Europe Programme: The project aligns with the Programme’s objective to reinforce Europe's strategic digital capacities by advancing the state-of-the-art in AI through the development of an integrated computational architecture that enhances AI transparency and reliability.
- European Strategy for Data: By fostering open science and ensuring data and code availability, CAVAA contributes to the Strategy's goal of making the EU a role model for a society empowered by data to make better decisions—in business and the public sector.
- Horizon Europe: CAVAA's commitment to interdisciplinary research and open access publications directly supports Horizon Europe's mission to deliver scientific excellence and provide access to new knowledge that drives societal changes.
- Ethics Guidelines for Trustworthy AI: The ethical framework established within CAVAA reflects the guidelines' principles for ensuring AI systems are lawful, ethical, and robust, from their inception through to their deployment, in line with EU values and fundamental rights.
In parallel, partners have designed and are piloting a study of laypeople's attitudes toward sharing data with artificially intelligent systems, as a function of their beliefs about the degree and type of conscious awareness those systems possess, with a plan to conduct a representative survey in the United Kingdom. Once the materials and stimuli have been refined based on the pilot results, the resulting data should inform policymakers about the relevance of lay attitudes toward sharing potentially sensitive information with AI systems, and how these attitudes may be affected by (mis)understanding of the type of awareness such systems may have of their internal states. Within the consortium, this will also be assessed in light of EU policy priorities.