Artificial Intelligence (AI) has become an indispensable tool across industries, revolutionizing how decisions are made and problems are solved. In the space sector, the integration of AI technologies has ushered in a new era of exploration, innovation and efficiency, with applications ranging from mission planning and satellite data processing to debris mitigation and the command and monitoring of space objects.

In the EU, the proposed AI Act was formally approved by the European Parliament on 13 March 2024, bringing a set of requirements and prohibitions relating to the development, provision and deployment of AI systems and models. Once it enters into force, all providers and deployers of AI systems under the AI Act will be subject to AI literacy obligations: they will have to take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf.

The AI Act also takes a risk-based approach, establishing (i) prohibited practices; (ii) requirements for high-risk systems, which include, for instance, AI systems intended to be used as safety components in the management and operation of critical digital infrastructure (such as cloud computing and data centre providers), of road traffic, or in the supply of water, gas, heating or electricity; (iii) transparency obligations for certain AI systems (e.g., systems generating synthetic audio, image, video or text content); and (iv) obligations for general-purpose AI (GPAI) models, such as large generative AI models.

Although the space sector is not expressly referred to in the AI Act (space systems are, notably, not listed as critical infrastructure), its obligations will apply to space-sector actors that develop, provide or deploy AI systems for the purposes indicated therein. Moreover, even space actors deploying AI outside the scope of the AI Act may find that certain actions, such as AI impact assessments, become best or usual practice, for instance where insurance companies so require.

A proposal for an AI Liability Directive is also under discussion, establishing civil liability rules on the burden of proof and the disclosure of evidence. Its application to the space sector, including its coordination with the liability provisions arising from the UN Space Treaties and from existing national space legislation, is a topic that space actors will need to assess carefully.

Though many countries worldwide are adopting AI strategies and approaches, the EU's ambitious regulatory approach is unparalleled. Given the autonomous and adaptive behaviour of AI systems, their use in the space sector will inevitably raise challenges, not least when it comes to liability.