GENERAL INTELLIGENCE REGULATION

Introduction

The prospective emergence of artificial general intelligence (AGI) constitutes a transformative juncture in the history of technological development, raising profound questions concerning governance, regulation, and the future of human socio-political organisation. Unlike narrow artificial intelligence systems, which are confined to discrete functional domains, AGI is characterised by its capacity for generalised cognition, enabling performance across a broad spectrum of intellectual tasks at or beyond human levels. This white paper provides an extensive and analytically rigorous exploration of the governance challenges posed by AGI, situating current regulatory developments within broader theoretical, legal, and institutional contexts. It critically evaluates existing frameworks, including the European Union’s risk-based regulatory model and the United Kingdom’s principles-based approach, while interrogating their adequacy in the face of general-purpose and potentially autonomous systems. Particular attention is devoted to the interplay between national sovereignty and global coordination, the role of private actors in shaping governance outcomes, and the ethical foundations underpinning regulatory efforts. The paper ultimately advances a multi-layered governance model that integrates domestic, regional, and international mechanisms, alongside technical standards and institutional innovations, as necessary components of an effective response to the unprecedented opportunities and risks associated with AGI.

The Rise of AI Governance as a Public Policy Challenge

The governance of artificial intelligence has rapidly ascended from a specialised concern within technical and legal scholarship to a central issue of global public policy, reflecting the increasing integration of algorithmic systems into economic, political, and social infrastructures. This transformation is not merely incremental but paradigmatic, insofar as emerging developments in machine learning, large-scale data processing, and computational architecture suggest the plausible realisation of AGI, a form of machine cognition capable of performing any intellectual task that a human being can undertake. The implications of such a development extend far beyond conventional regulatory concerns, implicating foundational questions regarding agency, authority, accountability, and the distribution of power within and between societies. While existing governance frameworks have been designed with narrow or domain-specific systems in mind, the generality, autonomy, and scalability of AGI render these frameworks increasingly inadequate, thereby necessitating a comprehensive re-evaluation of regulatory paradigms and institutional arrangements.

The urgency of this task is amplified by the accelerating pace of technological innovation and the concentration of advanced AI capabilities within a relatively small number of corporate and state actors, creating asymmetries of power that challenge traditional mechanisms of democratic oversight and international governance. At the same time, the global nature of AI development and deployment complicates efforts to establish coherent regulatory regimes, as divergent national interests, legal traditions, and strategic priorities impede the emergence of harmonised approaches. In this context, the governance of AGI must be understood not merely as a technical or legal problem but as a complex socio-political challenge requiring interdisciplinary analysis and coordinated action across multiple levels of authority.

Defining Artificial General Intelligence for Regulatory Purposes

A precise understanding of artificial general intelligence is essential for the development of appropriate governance frameworks, yet the concept itself remains contested within both technical and philosophical discourse. Broadly construed, AGI denotes systems endowed with general cognitive abilities, encompassing reasoning, learning, problem-solving, and adaptability across diverse contexts without task-specific reprogramming. This stands in contrast to narrow AI, which, despite achieving superhuman performance in specific domains, lacks the flexibility and transferability characteristic of human intelligence. From a regulatory perspective, the defining features of AGI (generality, autonomy, opacity, and scalability) collectively undermine the assumptions underpinning existing governance models, which typically rely on clear delineations of function, predictable system behaviour, and identifiable loci of control.

The generality of AGI implies that regulatory approaches cannot be confined to sector-specific applications, as the same underlying system may be deployed across multiple domains with varying risk profiles. Autonomy, in turn, raises fundamental questions regarding responsibility and liability, as decision-making processes may occur independently of direct human oversight. The opacity of advanced machine learning systems, particularly those based on deep neural networks, complicates efforts to ensure transparency and accountability, while the scalability of digital technologies enables rapid and widespread deployment, amplifying both benefits and potential harms. These characteristics necessitate a shift from static, rule-based regulation towards more dynamic and adaptive governance mechanisms capable of responding to evolving technological capabilities.

Risk Landscapes and Regulatory Justification

The governance of AGI must be grounded in a comprehensive assessment of its associated risks, which extend across technical, societal, ethical, and existential dimensions. At the technical level, concerns arise regarding system reliability, robustness, and alignment, particularly in scenarios where complex interactions between components give rise to emergent behaviours that are difficult to predict or control. The problem of alignment, in particular, has attracted significant attention within the research community, as it highlights the challenge of ensuring that highly capable systems act in accordance with human values and intentions, even in the absence of explicit instructions.

Societal risks encompass the potential for widespread economic disruption, as automation driven by advanced AI systems displaces human labour across a broad range of occupations, exacerbating inequality and undermining social cohesion. The concentration of AI capabilities within a small number of firms further intensifies these concerns, as it may lead to the entrenchment of monopolistic structures and the erosion of competitive markets. In addition, the capacity of AI systems to generate and disseminate information at scale raises significant challenges for democratic governance, including the manipulation of public opinion, the proliferation of misinformation, and the erosion of trust in institutions.

Ethical considerations are equally salient, encompassing issues such as bias, discrimination, privacy, and human dignity. The deployment of AI systems in sensitive domains, including healthcare, criminal justice, and public administration, amplifies the consequences of these concerns, necessitating robust safeguards to ensure fairness and accountability. Beyond these immediate risks, the prospect of AGI introduces the possibility of existential threats, particularly in scenarios where highly autonomous systems act in ways that are misaligned with human interests on a systemic scale. While such scenarios remain speculative, their potential magnitude warrants serious consideration within governance frameworks.

Contemporary Regulatory Paradigms

Contemporary approaches to AI regulation can be broadly categorised into risk-based, principles-based, and contextual paradigms, each of which offers distinct advantages and limitations in addressing the challenges posed by AGI. The risk-based approach, exemplified by the European Union’s Artificial Intelligence Act, seeks to classify systems according to their potential for harm, imposing proportionate obligations on developers and deployers. While this model provides a structured and ostensibly comprehensive framework, its reliance on predefined categories may prove inadequate in the context of general-purpose systems whose applications and risk profiles evolve over time.
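To make the risk-based logic concrete, the following sketch models tiered classification in schematic form. It is purely illustrative: the tier names loosely mirror the four-tier structure widely associated with the EU Act, but the obligations attached to each tier are simplified placeholders rather than the legal requirements themselves.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring a risk-based classification."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict pre-deployment obligations
    LIMITED = "limited"            # mainly transparency duties
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical, simplified mapping from tier to obligations; the legal text
# defines these in far greater detail and with different terminology.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the market"],
    RiskTier.HIGH: ["conformity assessment", "risk management system",
                    "human oversight", "logging and traceability"],
    RiskTier.LIMITED: ["transparency notices to users"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def obligations_for(tier: RiskTier) -> list:
    """Return the illustrative obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    print(obligations_for(RiskTier.HIGH))
```

Even in this toy form, the limitation noted above is visible: a general-purpose system has no single, stable entry in such a mapping, since its appropriate tier depends on how it is later deployed.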

In contrast, the principles-based approach adopted by the United Kingdom emphasises flexibility and adaptability, relying on high-level normative principles such as transparency, accountability, and fairness to guide regulatory practice. This approach allows for greater responsiveness to technological change but may lack the specificity and enforceability required to address the complex and high-stakes risks associated with AGI. Contextual regulation, which tailors governance to specific use cases and domains, offers a potential middle ground, yet it too faces challenges in accommodating the generality and cross-sectoral applicability of advanced AI systems.

These limitations underscore the need for hybrid regulatory models that integrate elements of each paradigm, combining the clarity and enforceability of risk-based approaches with the flexibility of principles-based frameworks and the specificity of contextual regulation. Such models must be complemented by robust institutional mechanisms capable of interpreting and applying regulatory standards in dynamic and uncertain environments.

Institutional Architecture and Enforcement

The effectiveness of any regulatory framework depends not only on its substantive provisions but also on the institutional structures responsible for its implementation and enforcement. In the European Union, the governance of AI is characterised by a multi-layered architecture that integrates supranational oversight with national enforcement, reflecting the broader principles of EU governance. The establishment of specialised bodies, including an AI Office and advisory panels, represents an effort to centralise expertise and coordinate regulatory action across Member States, thereby enhancing consistency and effectiveness.

The United Kingdom, by contrast, has adopted a more decentralised approach, leveraging existing regulatory institutions while promoting coordination through central bodies such as the AI Security Institute. This model reflects a strategic emphasis on innovation and economic competitiveness, seeking to avoid the perceived rigidity of comprehensive legislative frameworks. However, it also raises questions regarding coherence and accountability, particularly in the context of rapidly evolving technologies that may outpace the capacity of sector-specific regulators.

At the international level, efforts to establish governance mechanisms remain nascent and fragmented, encompassing a range of initiatives led by organisations such as the United Nations and the Council of Europe. These efforts reflect a growing recognition of the need for global coordination, yet they are constrained by geopolitical tensions, divergent regulatory philosophies, and the absence of binding enforcement mechanisms. The development of effective international governance frameworks will therefore require not only technical and legal innovation but also sustained diplomatic engagement and trust-building among states.

General-Purpose Systems and Regulatory Complexity

General-purpose AI systems, including foundation models, occupy a central position in contemporary debates on AI governance, as their versatility and adaptability challenge traditional regulatory approaches. Unlike application-specific systems, general-purpose models can be integrated into a wide range of contexts, often in ways that are not anticipated at the point of development. This raises significant questions regarding the allocation of responsibility between developers, deployers, and end-users, as well as the appropriate scope of regulatory obligations.

Recent regulatory initiatives have sought to address these challenges by introducing specific provisions for general-purpose AI, including requirements related to transparency, documentation, and risk management. However, the effectiveness of these measures remains uncertain, particularly in light of the rapid pace of technological development and the increasing complexity of AI systems. The governance of general-purpose AI therefore represents a critical test case for broader efforts to regulate AGI, highlighting the need for flexible and adaptive frameworks capable of accommodating uncertainty and change.
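As a schematic illustration of what documentation obligations might require in practice, the sketch below defines a minimal record that a developer could publish and a deployer could check for gaps. The field names and structure are assumptions made for the example, not a regulatory template.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Minimal, hypothetical documentation record for a general-purpose model.

    Field names are illustrative assumptions, not a regulatory template.
    """
    model_name: str
    developer: str
    training_data_summary: str                      # high-level description only
    intended_uses: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)
    evaluation_results: dict = field(default_factory=dict)

    def missing_sections(self) -> list:
        """Flag empty sections a downstream deployer would need to see."""
        gaps = []
        if not self.intended_uses:
            gaps.append("intended_uses")
        if not self.known_limitations:
            gaps.append("known_limitations")
        if not self.evaluation_results:
            gaps.append("evaluation_results")
        return gaps

if __name__ == "__main__":
    doc = ModelDocumentation(
        model_name="example-model",
        developer="Example Lab",
        training_data_summary="Public web text and licensed corpora (summary only).",
        intended_uses=["text summarisation", "drafting assistance"],
    )
    print(doc.missing_sections())  # ['known_limitations', 'evaluation_results']
```

Even such a minimal record makes the allocation-of-responsibility problem visible: the developer can describe intended uses and known limitations, but only the deployer knows the context in which the model is actually embedded.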

Private Power, Capacity Gaps and Regulatory Capture

The enforcement of AI regulation presents a series of formidable challenges, reflecting both the technical complexity of the systems in question and the broader socio-political context in which they are developed and deployed. Regulators often lack the expertise and resources required to effectively assess and monitor advanced AI systems, creating a reliance on self-regulation and industry-led standards that may not adequately reflect public interests. At the same time, the concentration of AI capabilities within a small number of corporations raises concerns regarding regulatory capture and the potential for private actors to shape governance outcomes in ways that prioritise commercial objectives over societal well-being.

These dynamics are further complicated by the global nature of AI development, which enables firms to operate across jurisdictions with varying regulatory requirements, thereby exploiting gaps and inconsistencies in governance frameworks. Addressing these challenges will require significant investment in regulatory capacity, including the development of specialised expertise, the establishment of auditing and certification mechanisms, and the enhancement of international cooperation in enforcement.

Ethical Foundations of Regulation

The governance of AGI is fundamentally underpinned by ethical considerations, reflecting the profound impact of AI systems on human lives and societal structures. Core principles such as transparency, accountability, fairness, and safety provide a normative foundation for regulatory efforts, yet their practical implementation remains contested. For example, the requirement for transparency may conflict with commercial interests in protecting proprietary information, while the pursuit of fairness may be complicated by competing conceptions of justice and equality.

These tensions highlight the importance of inclusive and participatory governance processes that incorporate diverse perspectives and values, ensuring that regulatory frameworks reflect the needs and aspirations of a broad range of stakeholders. Moreover, the ethical governance of AGI must extend beyond the mitigation of harm to encompass the promotion of positive outcomes, including the equitable distribution of benefits and the enhancement of human capabilities.

Global Coordination and Transnational Governance

The transnational nature of AGI necessitates the development of global governance mechanisms capable of addressing risks that transcend national boundaries, yet the realisation of such mechanisms is impeded by significant political and institutional barriers. Divergent national interests, strategic competition, and concerns regarding sovereignty all complicate efforts to establish binding international agreements, while existing institutions often lack the authority and capacity to enforce compliance.

Nevertheless, a range of proposals has emerged, including the establishment of international consortia, the development of shared safety standards, and the creation of monitoring and verification mechanisms. While these initiatives represent important steps towards global coordination, their success will depend on the willingness of states to cooperate and to cede a degree of autonomy in pursuit of collective security and benefit.

Towards a Multi-Layered Governance Model

In light of the limitations of existing approaches, this paper advocates for a multi-layered governance model that integrates national, regional, and global mechanisms with technical and institutional innovations. At the national level, governments must develop adaptive regulatory frameworks that balance innovation with safety, supported by investment in expertise and infrastructure. At the regional level, efforts should focus on harmonising standards and facilitating cross-border cooperation, while at the global level, the establishment of norms, agreements, and collaborative institutions is essential for addressing systemic risks.

Crucially, this framework must be complemented by technical governance mechanisms, including auditing, certification, and monitoring systems, as well as the incorporation of ethical principles into the design and development of AI systems. By integrating these elements, it is possible to create a governance architecture that is both robust and flexible, capable of responding to the evolving challenges posed by AGI.
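The sketch below indicates, in deliberately simplified form, how such technical mechanisms might be composed: a set of audit checks applied uniformly to a self-reported system record. The check names, record fields, and compute threshold are illustrative assumptions; real auditing regimes would rest on agreed standards and independent assessment rather than self-reporting.

```python
from typing import Callable, Dict

# Hypothetical audit checks over a self-reported system record; real regimes
# would rely on agreed standards and independent assessors.
AuditCheck = Callable[[dict], bool]

def has_documentation(record: dict) -> bool:
    return bool(record.get("documentation"))

def has_incident_monitoring(record: dict) -> bool:
    return bool(record.get("incident_monitoring"))

def within_compute_threshold(record: dict) -> bool:
    # Placeholder threshold; any real figure would be set by regulators.
    return record.get("training_compute_flops", 0) < 1e25

CHECKS: Dict[str, AuditCheck] = {
    "documentation": has_documentation,
    "incident_monitoring": has_incident_monitoring,
    "compute_threshold": within_compute_threshold,
}

def run_audit(record: dict) -> Dict[str, bool]:
    """Apply every registered check and report pass/fail per check."""
    return {name: check(record) for name, check in CHECKS.items()}

if __name__ == "__main__":
    system = {"documentation": "model card v1", "training_compute_flops": 3e24}
    print(run_audit(system))
    # {'documentation': True, 'incident_monitoring': False, 'compute_threshold': True}
```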

Conclusion

The governance and regulation of artificial general intelligence represent a defining challenge of the contemporary era, demanding a reconfiguration of legal, institutional, and ethical frameworks in response to unprecedented technological capabilities. While significant progress has been made in the development of AI governance, existing approaches remain insufficient to address the unique characteristics and risks associated with AGI. A comprehensive and integrated approach, grounded in interdisciplinary analysis and international cooperation, is therefore essential to ensure that the development and deployment of AGI are aligned with human values and societal objectives. The stakes are considerable, encompassing not only economic and political outcomes but also the future trajectory of human civilisation itself.

Bibliography

  • Council of Europe, Framework Convention on Artificial Intelligence (2024).
  • European Commission, The EU Model of AI Governance (2025).
  • European Union, Artificial Intelligence Act: Governance and Implementation (2024).
  • Hausenloy, J. et al., ‘Multinational AGI Consortium (MAGIC)’ (2023).
  • IAPP, ‘Global AI Governance Law and Policy: United Kingdom’ (2025).
  • Novelli, C. et al., ‘A Robust Governance for the AI Act’ (2024).
  • Park, S., ‘Bridging the Global Divide in AI Regulation’ (2023).
  • Science, Innovation and Technology Committee, Governance of Artificial Intelligence: Interim Report (2023-24).
  • Sousa e Silva, N., ‘The Artificial Intelligence Act: Critical Overview’ (2024).
  • Taeihagh, A., ‘Governance of Artificial Intelligence’, Policy and Society, Vol. 40, No. 2 (2021).
  • UK Government, A Pro-Innovation Approach to AI Regulation (2023).
  • United Nations, AI Governance Recommendations (2024).
