PINEBERRY: Building Secure and Explainable AI for Space Missions
At KP Labs, we are constantly pushing the boundaries of innovation in the space sector. That is why we are thrilled about our latest collaboration with the European Space Agency (ESA), its European Space Operations Centre (ESOC) and European Space Research and Technology Centre (ESTEC), and the MI2.AI team from Warsaw University of Technology. The PINEBERRY initiative aims to enhance the safety, security, and transparency of artificial intelligence (AI) systems in space operations. The project also underlines the critical role of AI in the space sector and fosters understanding across institutions. The collaboration between a space agency, a university, and a private space company demonstrates a collective commitment to addressing the unique challenges posed by autonomous systems in the harsh environment of space, and highlights the importance of AI in the future of space exploration.
PINEBERRY: Explainable and Secure AI
PINEBERRY stands for Explainable, Robust, and Secure AI for Demystifying Space Mission Operations. The project rests on two foundational pillars: Explainable AI (XAI) and Secure AI (SAI), both essential for addressing the challenges posed by the increasing reliance on AI in space missions. XAI keeps AI systems transparent, giving human operators, researchers, and developers clear insight into how autonomous systems reach their decisions and thereby increasing trust in their reliability. SAI, in turn, safeguards these systems against threats that could compromise mission success, including data corruption, adversarial attacks, data poisoning, prompt injection, and overreliance on model outputs.
Why Security Matters
Space missions rely heavily on the integrity of the data transmitted from spacecraft to Earth. A single instance of data corruption, whether intentional or accidental, can disrupt mission-critical decisions, affecting everything from spacecraft navigation to scientific experiments. Furthermore, secure management of mission operations is essential to prevent unauthorized access to or manipulation of AI systems, particularly in autonomous or semi-autonomous space operations. In PINEBERRY, we introduce techniques such as data sanitization, robust training strategies (for example, ensembling), and continuous model monitoring to address these vulnerabilities. These measures aim to ensure reliability and security across a wide range of AI applications, from anomaly detection and telemetry analysis to supporting broader mission-critical tasks.
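To give a flavour of the first two of these measures, the snippet below is a minimal, illustrative sketch written for this post, not PINEBERRY project code: it sanitizes a synthetic telemetry table and then aggregates the votes of several independently seeded anomaly detectors, so that a handful of corrupted samples cannot sway the whole ensemble. The data, thresholds, and model choices here are assumptions made purely for illustration.

```python
# Illustrative sketch only: toy data sanitization plus an ensemble of
# anomaly detectors, in the spirit of the robust training strategies
# mentioned above. Not PINEBERRY project code.
import numpy as np
from sklearn.ensemble import IsolationForest


def sanitize(telemetry: np.ndarray) -> np.ndarray:
    """Basic sanitization: drop non-finite samples and clip extreme values."""
    telemetry = telemetry[np.isfinite(telemetry).all(axis=1)]
    lo, hi = np.percentile(telemetry, [0.5, 99.5], axis=0)  # per-channel bounds (assumed)
    return np.clip(telemetry, lo, hi)


def ensemble_anomaly_fraction(telemetry: np.ndarray, n_members: int = 5) -> np.ndarray:
    """Train several detectors on different seeds and subsamples, then report
    the fraction of members flagging each sample. A few poisoned samples can
    only influence part of the ensemble."""
    votes = np.zeros(len(telemetry))
    for seed in range(n_members):
        detector = IsolationForest(n_estimators=100, max_samples=0.8, random_state=seed)
        detector.fit(telemetry)
        votes += (detector.predict(telemetry) == -1)  # -1 marks an anomaly
    return votes / n_members


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.normal(size=(500, 4))            # nominal telemetry channels
    spikes = rng.normal(8.0, 1.0, size=(5, 4))   # injected off-nominal samples
    data = sanitize(np.vstack([clean, spikes]))
    scores = ensemble_anomaly_fraction(data)
    print("Samples flagged by a majority of members:", int((scores > 0.5).sum()))
```

The design point of the toy example is the voting step: requiring a majority of independently trained members to agree before raising an alarm makes the pipeline less sensitive to any single corrupted training run or data slice.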
Building Trust Through Explainability
Transparency is just as important as security when it comes to the adoption of AI in space missions. ESA controllers need to understand and trust the decisions made by AI systems, especially in high-stakes scenarios where human oversight is limited. In PINEBERRY, we use XAI techniques to address the "black-box" nature of many AI models, explaining how they process inputs and arrive at decisions. These explanations are tailored to different data modalities, such as time series telemetry, text, and computer vision data, enabling ESA controllers to better understand anomalies and validate AI actions.
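As an idea of what such explanations can look like for time series telemetry, here is a hedged, self-contained sketch, again a toy example rather than a PINEBERRY deliverable: an occlusion-style attribution in which each telemetry channel of a flagged sample is replaced by its nominal median, and the resulting recovery in the anomaly score is reported as that channel's contribution to the alarm.

```python
# Toy occlusion-style explanation for a telemetry anomaly detector.
# All channel names and values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest


def explain_anomaly(detector, nominal: np.ndarray, sample: np.ndarray) -> dict:
    """Attribute a sample's anomaly score to individual telemetry channels by
    occluding one channel at a time with its nominal median value."""
    baseline = detector.score_samples(sample.reshape(1, -1))[0]  # lower = more anomalous
    medians = np.median(nominal, axis=0)
    contributions = {}
    for channel in range(sample.size):
        occluded = sample.copy()
        occluded[channel] = medians[channel]  # neutralize one channel
        restored = detector.score_samples(occluded.reshape(1, -1))[0]
        contributions[channel] = restored - baseline  # score recovered by the occlusion
    return contributions


rng = np.random.default_rng(1)
nominal = rng.normal(size=(500, 3))   # e.g. temperature, voltage, current (assumed)
detector = IsolationForest(random_state=1).fit(nominal)
anomaly = np.array([0.1, 6.0, -0.2])  # channel 1 is off-nominal
for channel, contribution in explain_anomaly(detector, nominal, anomaly).items():
    print(f"channel {channel}: contribution {contribution:+.3f}")
```

In the toy run, occluding the off-nominal channel recovers the most anomaly score, pointing the operator directly at the channel driving the alarm, which is the kind of per-modality insight that explanation techniques aim to provide.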
The PINEBERRY Consortium
As the technical leader of the PINEBERRY project, KP Labs has developed comprehensive frameworks for identifying and mitigating risks to AI systems, ensuring they can operate securely in even the most challenging conditions. The project also includes catalogues that map security risks and explainability techniques to real-world applications, providing guidelines for mitigating issues specific to space missions. These catalogues, available at https://assurance-ai.space-codev.org/materials, will serve as a resource for developers and operators working with AI in the space domain.
Krzysztof Kotowski, Project Leader at KP Labs, says:
"In the realm of space exploration, every decision made by autonomous systems must be secure and transparent. PINEBERRY is our answer to the growing need for trustworthy AI solutions that can operate safely in the harshest environments of space. By combining security with explainability, we're setting new standards for AI in space operations. The project marks a significant milestone for us, as it highlights ESA’s recognition of our expertise in AI for space missions. It also reflects the trust and confidence that ESA places in our capabilities".
Importantly, PINEBERRY exemplifies the necessity of collaboration in the space sector. KP Labs develops the models; ESA oversees the project, ensuring alignment with its mission objectives and European space standards; the Warsaw University of Technology team contributes advanced research in AI security and explainability; and ESA ESOC validates the project’s frameworks through mission scenarios. The collaboration also serves an educational purpose, establishing a baseline for the development of secure and explainable AI techniques in modern space missions.
Applications
PINEBERRY’s innovations are demonstrated through hypothetical future mission scenarios developed for the project, which highlight both its security and explainability frameworks. For example, the Helios-9 mission scenario illustrates how advanced AI security measures can detect and mitigate threats in real time, while the Odyssey mission showcases how explainable AI enables operators to validate and refine AI-driven decisions during autonomous operations. Additionally, five proof-of-concept (PoC) applications have been created to address specific challenges in computer vision, time series data, and natural language processing. These PoCs explore opportunities for AI developers, address potential vulnerabilities, and demonstrate mitigations tailored to ESA’s mission requirements. The code for these applications will be published later this year.
Professor Przemysław Biecek from the Warsaw University of Technology explains:
"The complexity of space operations requires AI systems that are not only advanced but also comprehensible. With PINEBERRY, we’re creating tools that demystify AI behavior, ensuring operators can trust and rely on these systems in critical scenarios."
PINEBERRY’s principles will also be explored in a series of "Secure Your AI" data science challenges planned for the second and third quarters of 2025. These challenges, including tasks titled "Fake or Real" and "Data Heist," will invite participants to address real-world scenarios by both attacking and securing large language models (LLMs) and AI models for satellite telemetry analysis. The challenges aim to tackle pressing security risks such as data leakage, data poisoning, overreliance on AI, and AI trojan horses. The initiative will culminate in a grand Hackathon Day, offering a unique opportunity to apply new knowledge in practice and fostering innovation and collaboration within the space and AI communities.
Visit assurance-ai.space-codev.org to learn more about how PINEBERRY is shaping the future of AI in space operations.