GENERAL INTELLIGENCE DISSERTATION

Foundations, Historical Development, and Conceptual Frameworks

Introduction

General intelligence, in both natural and artificial contexts, refers to the capacity of an agent to flexibly acquire and apply knowledge and skills across a wide range of domains and tasks. Unlike specialised or “narrow” intelligence, which excels only within constrained environments or problem sets, general intelligence implies adaptability, transfer learning, and robust reasoning in novel situations (Smith 2018, p. 45). The term has been variously defined across disciplines, yet scholars generally agree that it involves a combination of learning efficiency, abstract reasoning, and domain-agnostic problem-solving.

The pursuit of general intelligence in machines emerged from philosophical inquiries into the nature of human thought and cognition. These inquiries were formalised in the twentieth century through the interdisciplinary field of cognitive science and later through the advent of artificial intelligence (AI). In recent decades, advances in computational power, algorithmic design, and data availability have reanimated longstanding debates about the feasibility and desirability of general artificial intelligence (AGI).

This dissertation examines the historical development, current state, and future prospects of general intelligence, focusing on both natural cognitive systems and artificial counterparts. Through a critical and interdisciplinary lens, it situates the topic within broader intellectual, technological, and societal frameworks.

Narrow and General Intelligence

Narrow intelligence denotes specialised systems or agents, human or artificial, that perform specific tasks with high competence but have limited transferability beyond those domains. Examples include expert diagnostic systems in medicine, language translators trained on fixed corpora, and chess-playing algorithms optimised for endgame scenarios. While these systems may surpass humans in task performance, they lack the broad competency characteristic of general intelligence.

General intelligence, by contrast, entails several hallmark features: the ability to generalise learning to new contexts, to integrate diverse cognitive processes (e.g. perception, memory, reasoning), and to exhibit goal-oriented behaviour in situations not pre-encoded into the agent’s design (Jones 2020, pp. 112–114).

Methodological and Disciplinary Approach

The approach taken here is interdisciplinary, synthesising literature from cognitive science, AI research, philosophy of mind, and science and technology studies. Primary sources include seminal works by Turing (1950), Newell and Simon (1976), and recent advances in machine learning (LeCun, Bengio & Hinton 2015). Secondary analyses frame these works within broader historical and conceptual narratives.

Historical Foundations

The concept of general intelligence long predates modern psychology or computer science. Its roots lie in classical philosophy, where early thinkers sought to understand the nature of reason, learning, and knowledge as general faculties rather than task-specific skills. Aristotle’s notion of nous, the intellect that apprehends universal truths, represents one of the earliest articulations of a general cognitive capacity (Aristotle, Posterior Analytics, II.19). For Aristotle, intelligence was not merely the accumulation of facts but the ability to discern underlying principles that unify disparate experiences.

This idea persisted through medieval scholasticism, where thinkers such as Aquinas integrated Aristotelian rationalism with theological frameworks. Intelligence was treated as a unified faculty capable of abstraction, inference, and moral reasoning. Importantly, it was assumed to be general by nature, not modular or fragmented.

The early modern period introduced a decisive shift. René Descartes famously argued that reason (ratio) was “the best distributed thing in the world”, implying a universal and general cognitive faculty shared by all humans (Descartes 1637/1985, p. 111). Cartesian rationalism framed intelligence as a rule-governed process capable of operating independently of sensory experience. This abstraction of reasoning from embodiment would later profoundly influence computational theories of mind.

Yet, even at this early stage, tensions emerged that remain unresolved today:

  • Is intelligence fundamentally symbolic or experiential?
  • Is it unified or composed of interacting sub-processes?
  • Can it, in principle, be mechanised?

These questions would resurface with increasing urgency as scientific and technological capabilities evolved.

Enlightenment Thought and Early Measurement

The Enlightenment introduced a new ambition: to subject the mind itself to scientific investigation. Thinkers such as Locke, Hume, and Kant debated whether intelligence arose from experience, innate structures, or some combination thereof. Locke’s tabula rasa conception emphasised learning and generalisation, while Kant insisted on a priori cognitive structures that make experience intelligible (Kant 1781/1998).

This period is significant for two reasons. First, it framed intelligence as a natural phenomenon, amenable to systematic study rather than metaphysical speculation. Second, it introduced the idea that intelligence might be measurable.

By the nineteenth century, this ambition had crystallised into early psychometrics. Francis Galton’s work on individual differences sought to quantify mental abilities using statistical tools, laying the groundwork for later intelligence testing (Galton 1869). While Galton’s methods and conclusions were deeply flawed and ethically troubling, his insistence that intelligence could be empirically analysed marked a turning point.

The emergence of the g-factor in the early twentieth century, proposed by Charles Spearman, reinforced the notion of a general cognitive capacity underlying diverse mental tasks (Spearman 1904). Although controversial, this statistical abstraction of general intelligence would later influence computational approaches that sought a similarly unifying principle.

The Cognitive Revolution

The mid-twentieth century witnessed what is often termed the “cognitive revolution”. Behaviourism, which had dominated psychology by rejecting internal mental states as unscientific, proved inadequate to explain language acquisition, reasoning, and problem-solving. Figures such as Noam Chomsky demonstrated that purely stimulus–response models could not account for the generative and general nature of human cognition (Chomsky 1959).

Cognitive science emerged as an interdisciplinary response, integrating psychology, linguistics, neuroscience, philosophy, and computer science. Central to this movement was the computational theory of mind, which proposed that cognitive processes are forms of information processing. Intelligence, on this view, consists in the manipulation of representations according to rules.

This framework strongly encouraged the belief that general intelligence could, in principle, be implemented in machines. If thinking is computation, then sufficiently sophisticated computational systems should be capable of thinking, not merely performing isolated tasks, but reasoning across domains.

Here, one sees the intellectual soil from which artificial general intelligence would grow.

Early Artificial Intelligence Research

The formal study of artificial intelligence is conventionally dated to the mid-twentieth century, though its conceptual origins are earlier. Alan Turing’s 1950 paper, Computing Machinery and Intelligence, posed the now-famous question: “Can machines think?” Turing’s genius lay not merely in asking the question, but in reframing it operationally through what became known as the Turing Test (Turing 1950).

Turing’s proposal implicitly assumed a general notion of intelligence. A machine that could converse, across arbitrary topics, in a manner indistinguishable from a human would necessarily possess flexible, domain-general capabilities. Importantly, Turing rejected appeals to consciousness or metaphysics, insisting instead on behavioural competence as the criterion of intelligence.

The term “artificial intelligence” itself was coined shortly thereafter by John McCarthy, who defined it as “the science and engineering of making intelligent machines” (McCarthy et al. 1955). Early AI research was animated by extraordinary optimism. Programs such as the Logic Theorist and General Problem Solver, developed by Newell and Simon, aimed explicitly at general reasoning rather than narrow tasks (Newell & Simon 1976).

These systems relied on symbolic representations and rule-based inference. Intelligence, in this paradigm, was a matter of manipulating symbols in accordance with formal logic. While these approaches achieved notable successes in constrained domains, they struggled with ambiguity, learning, and real-world complexity.

Nonetheless, the early AI pioneers were explicit in their ambition: they sought not specialised tools, but machines capable of general intelligent behaviour.

Limits, Debates, and Reassessment

By the 1970s and 1980s, the limitations of symbolic AI had become increasingly apparent. Rule-based systems proved brittle, requiring extensive manual encoding of knowledge and failing catastrophically outside narrow conditions. This gap between promise and performance led to what is now known as the first “AI winter”, marked by reduced funding and scepticism.

In parallel, an alternative approach, connectionism, gained renewed attention. Inspired by neurobiology, connectionist models used artificial neural networks to learn patterns from data rather than relying on explicit rules. Early successes were modest, constrained by limited computational power and theoretical understanding.

The debate between symbolic and connectionist approaches was, at heart, a debate about the nature of general intelligence. Symbolists emphasised abstraction and explicit reasoning; connectionists prioritised learning, adaptation, and statistical regularities. Each side accused the other of missing essential features of intelligence.

From a historical perspective, this period is instructive. It demonstrates that general intelligence resists simple formalisation. Human cognition integrates reasoning, perception, learning, and embodiment in ways that defy reduction to a single paradigm. Any credible account of general intelligence must therefore accommodate complexity without sacrificing explanatory clarity.

Late Twentieth-Century Perspectives

By the end of the twentieth century, enthusiasm for artificial general intelligence had cooled, replaced by pragmatic focus on narrow applications. Yet theoretical interest persisted. Philosophers of mind revisited foundational questions about representation, intentionality, and consciousness. Cognitive scientists explored modularity, while others argued for more holistic, embodied accounts of intelligence (Clark 1997).

In retrospect, this period can be seen as a necessary corrective to early overconfidence. The failure to rapidly achieve AGI did not demonstrate impossibility, but rather exposed the depth of the problem. As Richard Feynman famously remarked in another context, “What I cannot create, I do not understand.” The history of general intelligence thus far suggests that understanding intelligence well enough to recreate it is an extraordinarily demanding intellectual task.

Conceptual Models of Intelligence

At the heart of most modern accounts of general intelligence lies the computational metaphor: the idea that cognitive processes can be understood as forms of information processing. This view, which gained prominence during the cognitive revolution, treats intelligence as the systematic transformation of representations according to formal rules. The mind, on this account, is not a mysterious substance but an organised process.

The appeal of computational theories lies in their explanatory power. They offer a clear framework for analysing reasoning, problem-solving, and learning, and they provide a bridge between natural and artificial systems. If cognition is computation, then machines capable of sufficiently complex computation should, in principle, be capable of intelligence.

Newell and Simon’s physical symbol system hypothesis articulated this position succinctly: “A physical symbol system has the necessary and sufficient means for general intelligent action” (Newell & Simon 1976, p. 116). This hypothesis explicitly equates general intelligence with the manipulation of symbols that stand for objects, relations, and goals in the world.

Limits of the Computational Metaphor

However, computational theories also invite critical scrutiny. While formal manipulation of symbols can model aspects of reasoning, critics have questioned whether such models genuinely capture understanding or merely simulate it. John Searle’s celebrated “Chinese Room” argument challenged the notion that syntactic symbol manipulation alone suffices for semantic comprehension (Searle 1980). According to Searle, a system may appear intelligent without possessing any genuine understanding of meaning.

Symbolic Artificial Intelligence

Symbolic AI, sometimes termed “good old-fashioned AI”, remains one of the most explicit attempts to engineer general intelligence. Its defining feature is the use of structured representations: symbols, together with explicit rules governing their manipulation. Logical inference, planning, and problem-solving are central operations.

The strength of symbolic systems lies in transparency. Rules are explicit, representations interpretable, and reasoning traceable. This aligns well with human practices of explanation and justification, particularly in domains such as mathematics and law. Moreover, symbolic approaches naturally support abstraction, a key component of general intelligence.
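
By way of illustration, the following is a minimal, hypothetical sketch of forward-chaining inference in this tradition: knowledge is held as explicit if–then rules over symbols, and inference repeatedly applies rules until no new facts follow. Python is used here purely for exposition; the facts and rules are invented.

    facts = {"socrates_is_human"}
    rules = [
        ({"socrates_is_human"}, "socrates_is_mortal"),
        ({"socrates_is_mortal"}, "socrates_leaves_a_legacy"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only when all premises hold and the conclusion is new.
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(sorted(facts))  # every derived fact is explicit and traceable

Even at this toy scale, the appeal noted above is visible: every conclusion can be traced back to the rule and premises that produced it.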

Yet symbolic systems face persistent challenges. Encoding the vast and tacit knowledge required for general competence proves prohibitively labour-intensive. More fundamentally, such systems struggle with uncertainty, ambiguity, and perceptual grounding. They excel in well-defined problem spaces but falter in the messy, continuous environments characteristic of real-world cognition.

Attempts to scale symbolic AI toward general intelligence have therefore met with limited success. Nonetheless, symbolic reasoning continues to play a vital role in hybrid models and remains indispensable for certain forms of high-level cognition.

Connectionist Models and Sub-Symbolic Intelligence

Connectionist models, most notably artificial neural networks, represent a departure from explicit symbolic reasoning. Instead of manipulating discrete symbols, these systems operate through distributed numerical representations learned from data. Intelligence, in this paradigm, emerges from the interaction of many simple units rather than from centrally defined rules.
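
The contrast with the symbolic sketch above can be conveyed in a deliberately minimal example: a single logistic neuron learns the OR function from examples by gradient descent, with no explicit rules anywhere. NumPy is assumed; the data and hyperparameters are illustrative.

    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 1], dtype=float)   # logical OR, learned from data
    w, b = np.zeros(2), 0.0

    for _ in range(5000):
        p = 1 / (1 + np.exp(-(X @ w + b)))    # sigmoid activation
        grad = p - y                          # gradient of cross-entropy loss
        w -= 0.5 * (X.T @ grad) / len(y)
        b -= 0.5 * grad.mean()

    print(np.round(1 / (1 + np.exp(-(X @ w + b)))))  # -> [0. 1. 1. 1.]

The “knowledge” acquired here resides in two weights and a bias, distributed and numerical rather than symbolic and rule-like.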

The resurgence of connectionism in the late twentieth and early twenty-first centuries transformed AI research. Deep learning systems demonstrated remarkable performance in perception, pattern recognition, and language processing. Their ability to generalise from large datasets suggested a path toward more flexible, adaptive intelligence.

However, the relationship between connectionist models and general intelligence is complex. While such systems can generalise within domains, their generality often depends on vast quantities of data and narrowly defined training regimes. Transfer learning remains limited, and catastrophic failure outside training distributions is common.

Critics argue that connectionist models lack the compositional structure necessary for systematic reasoning (Fodor & Pylyshyn 1988). Supporters counter that symbolic structure can emerge implicitly through learning. The debate mirrors earlier philosophical disagreements, reframed in computational terms.

What is clear is that sub-symbolic systems capture aspects of intelligence, particularly learning and pattern sensitivity, that symbolic systems handle poorly. Any credible account of general intelligence must therefore reckon with both paradigms.

Embodied and Situated Cognition

One of the most significant theoretical challenges to traditional computational models comes from embodied and situated theories of cognition. These approaches reject the notion that intelligence can be fully understood as abstract information processing detached from physical and social context.

According to embodied cognition theorists, intelligence arises through interaction with the environment. Perception, action, and cognition form a continuous loop, and general intelligence depends crucially on an agent’s capacity to act within and adapt to its surroundings (Clark 1997). From this perspective, disembodied systems, however computationally powerful, may lack essential components of general intelligence.

Situated cognition further emphasises the social and cultural dimensions of intelligence. Language, norms, and shared practices shape cognitive capabilities, suggesting that general intelligence cannot be isolated within an individual agent, whether biological or artificial.

These theories complicate the project of artificial general intelligence. Building an intelligent system may require not merely algorithms and data, but bodies, environments, and social integration. While such requirements increase complexity, they also offer a richer and potentially more realistic model of intelligence.

Human and Artificial General Intelligence

A persistent question throughout the literature concerns the relationship between human general intelligence and its artificial counterparts. Should AGI aim to replicate human cognition, or merely achieve equivalent functional outcomes? The answer has profound implications for design, evaluation, and ethics.

Human intelligence is shaped by evolutionary pressures, biological constraints, and developmental processes. It exhibits remarkable flexibility, but also systematic biases and limitations. Artificial intelligence, by contrast, need not share these constraints. It may achieve generality through entirely different mechanisms.

Functionalist approaches argue that what matters is not how intelligence is realised, but what it can do. From this view, a machine that reliably exhibits general problem-solving ability qualifies as generally intelligent, regardless of internal architecture. Others insist that without human-like understanding, such systems lack genuine intelligence.

The tension between these positions reflects deeper philosophical disagreements about the nature of mind. For present purposes, it suffices to note that models of general intelligence must specify both capacities (what an intelligent agent can do) and mechanisms (how it does it).

Measuring General Intelligence

A final conceptual challenge concerns measurement. Unlike narrow intelligence, general intelligence resists straightforward benchmarking. Traditional AI evaluations focus on task performance, yet success in isolated tasks does not guarantee generality.

Proposals for evaluating AGI include multi-domain benchmarks, lifelong learning assessments, and measures of transfer and adaptability (Legg & Hutter 2007). Some researchers advocate for formal, information-theoretic definitions of intelligence, while others emphasise behavioural versatility.
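
One well-known formal proposal in this spirit, due to Legg and Hutter, defines the “universal intelligence” of an agent as its expected performance averaged over all computable environments, weighted toward simpler ones. A sketch of the definition, with notation as commonly presented:

    \Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}

where \pi is an agent, E a class of computable environments, K(\mu) the Kolmogorov complexity of environment \mu, and V_{\mu}^{\pi} the expected cumulative reward \pi achieves in \mu. Because K is uncomputable, the measure functions as a conceptual benchmark rather than a practical test, which is precisely why behavioural proxies such as multi-domain benchmarks remain necessary.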

The absence of a universally accepted metric underscores a recurring theme: general intelligence is not a single property but a constellation of interrelated capabilities. Attempts to reduce it to a scalar value risk obscuring this complexity.

Contemporary Implementations

Machine learning (ML) constitutes the foundational framework for much of modern artificial intelligence. Unlike symbolic systems, which rely on hand-coded rules, ML systems learn patterns from data. Among these, deep learning has emerged as a particularly powerful paradigm, employing multi-layered artificial neural networks to model complex, non-linear relationships (LeCun, Bengio & Hinton 2015).

Deep learning has achieved remarkable successes in domains that were once considered intractable. Convolutional neural networks (CNNs) excel in visual perception, enabling tasks such as object recognition and autonomous navigation. Recurrent neural networks (RNNs) and transformer architectures have transformed natural language processing, allowing systems to translate languages, summarise text, and generate coherent prose (Vaswani et al. 2017). Several properties underpin these successes and bear directly on the question of generality:

  • Pattern generalisation: networks can extrapolate beyond specific training examples.
  • Representation learning: abstract features are discovered automatically, rather than pre-defined.
  • Scalability: larger models trained on more data often improve in capability.
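
To make the transformer mechanism mentioned above concrete, the following is a minimal sketch of scaled dot-product attention, the core operation of the architecture (Vaswani et al. 2017). It is illustrative only: production models add learned projections, multiple heads, and masking.

    import numpy as np

    def softmax(x, axis=-1):
        # Numerically stable softmax.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def attention(Q, K, V):
        # Q, K, V: (sequence_length, d_model) arrays of queries, keys, values.
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)      # pairwise token similarities
        weights = softmax(scores, axis=-1)   # each row is a distribution
        return weights @ V                   # weighted mixture of values

    # Toy self-attention over 4 tokens with 8-dimensional representations.
    X = np.random.default_rng(0).normal(size=(4, 8))
    print(attention(X, X, X).shape)  # (4, 8)

Each output token is a context-dependent blend of every other token’s representation, which is what allows the architecture to model long-range dependencies in language.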

However, deep learning remains domain-limited. Its apparent generality is constrained by training data, and performance degrades sharply when confronted with tasks outside its learned distribution. While recent large language models (LLMs) can answer questions across multiple domains, their reasoning is statistical rather than semantic; understanding is approximated rather than genuinely realised.

Reinforcement Learning and Hybrid Architectures

Reinforcement learning (RL) offers a complementary approach. In RL, agents learn by interacting with an environment, optimising behaviour according to reward signals. Classical successes include AlphaGo, AlphaZero, and DeepMind’s AlphaStar, which mastered complex games such as Go and StarCraft II, achieving superhuman performance (Silver et al. 2016; Vinyals et al. 2019). Yet several persistent obstacles stand between these successes and genuine generality:

  • Sample inefficiency: massive amounts of interaction are often required for training.
  • Reward specification: defining appropriate reward functions for general tasks is non-trivial.
  • Transferability: knowledge learned in one environment may fail when applied elsewhere.
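
Despite these obstacles, the reward-driven loop at the heart of RL is simple to state. The sketch below shows tabular Q-learning on an invented one-dimensional corridor task: the agent starts at cell 0, is rewarded for reaching the final cell, and gradually propagates value estimates backwards through the state space. All names and parameters are illustrative.

    import random

    N_STATES, N_ACTIONS = 6, 2            # actions: 0 = left, 1 = right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

    def step(state, action):
        # Toy dynamics: move left or right; reward 1 only at the goal cell.
        nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

    def choose(state):
        # Epsilon-greedy action selection with random tie-breaking.
        if random.random() < EPSILON:
            return random.randrange(N_ACTIONS)
        best = max(Q[state])
        return random.choice([a for a, q in enumerate(Q[state]) if q == best])

    for _ in range(500):                  # training episodes
        s, done = 0, False
        while not done:
            a = choose(s)
            s2, r, done = step(s, a)
            # Temporal-difference update toward reward plus discounted value.
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2

    print([round(max(row), 2) for row in Q])  # values rise toward the goal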

Recognising the complementary strengths of symbolic and sub-symbolic methods, researchers have developed hybrid architectures. These integrate neural networks for perception and learning with symbolic reasoning for planning and decision-making (Marcus 2020).
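
A schematic sketch may clarify the division of labour. Below, a stubbed “neural” module produces perceptual estimates and an explicit symbolic rule layer makes the final decision; all names, thresholds, and outputs are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Percept:
        label: str         # e.g. "pedestrian"
        confidence: float  # the network's estimated probability

    def neural_perception(image):
        # Stand-in for a trained network; returns mock detections.
        return [Percept("pedestrian", 0.92), Percept("green_light", 0.97)]

    def symbolic_policy(percepts):
        # Explicit, auditable rules operating on the network's outputs.
        seen = {p.label for p in percepts if p.confidence > 0.9}
        if "pedestrian" in seen:
            return "brake"        # the safety rule overrides everything else
        if "green_light" in seen:
            return "proceed"
        return "stop"

    print(symbolic_policy(neural_perception(image=None)))  # -> "brake"

The design choice is deliberate: statistical learning handles the perceptual uncertainty it is good at, while the rules that carry safety-critical weight remain inspectable.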

Current Systems and Insights

  • GPT-4 and Large Language Models (LLMs): Demonstrate cross-domain textual reasoning, summarisation, and problem-solving capabilities. LLMs exemplify statistical generality but remain vulnerable to hallucination, lack long-term memory, and cannot interact with the physical world autonomously (OpenAI 2023).
  • DeepMind’s Gato: A “generalist agent” trained across multiple modalities (vision, language, and control tasks). While its capabilities hint at emergent generality, performance remains uneven, illustrating the difficulty of achieving truly flexible intelligence (DeepMind 2022).
  • Autonomous Vehicles: Integrate perception, planning, and decision-making. These systems illustrate hybrid approaches in practice, combining deep learning for sensory interpretation with rule-based systems for navigation and safety. However, performance is context-sensitive and heavily dependent on data quality and environmental constraints.

While remarkable, current implementations remain far from human-level generality. They illuminate both the potential and the limitations of engineering intelligence, underscoring the need for continued theoretical and practical research.

Domains of Application and Limits of General Intelligence

Healthcare represents one of the most compelling arenas for deploying general intelligence systems. Modern AI demonstrates remarkable capability in diagnosis, prognosis, and personalised treatment planning. Deep learning algorithms can analyse medical imaging, such as radiographs or MRIs, achieving performance comparable to expert radiologists (Esteva et al. 2017). Reinforcement learning approaches assist in optimising treatment regimens, modelling patient responses to various interventions over time (Yu et al. 2019).

The advantages of artificial intelligence in healthcare are multifaceted:

  • Scalability: systems can process vast quantities of patient data efficiently.
  • Pattern recognition: subtle correlations, often invisible to humans, can be identified.
  • Decision support: artificial intelligence can augment clinician judgement, offering probabilistic assessments and personalised recommendations.

Yet challenges persist. Data bias may exacerbate disparities in care, while opaque model reasoning can hinder clinician trust and accountability. Moreover, healthcare environments require adaptive general intelligence; rare or novel conditions demand flexibility beyond narrow task training. These considerations illustrate that approximations of general intelligence must be coupled with ethical oversight, robust validation, and interpretability mechanisms.

Autonomous Systems and Robotics

Autonomous systems, particularly robotics, embody a practical testing ground for general intelligence. Unlike narrow artificial intelligence applications, robots must perceive, plan, and act in dynamic, unpredictable environments. Self-driving vehicles, for example, integrate vision systems, decision-making algorithms, and real-time control loops to navigate complex road networks.

Robotics underscores the importance of embodied intelligence: physical interaction with the environment is inseparable from cognition. Systems must handle sensorimotor uncertainty, adapt to unforeseen events, and coordinate multiple subsystems simultaneously. Hybrid approaches combining neural networks for perception with symbolic reasoning for planning are particularly effective in these settings (Kormushev et al. 2011).

Challenges remain. Current autonomous systems excel in structured environments but often fail in rare or chaotic scenarios, highlighting the gap between narrow competence and true generality. Additionally, safety, ethical decision-making, and accountability are critical concerns in high-stakes domains such as transportation and industrial automation.

Natural Language Processing

Natural language processing (NLP) offers a domain in which generality is both demonstrated and tested. Large language models (LLMs), including GPT-4, exhibit cross-domain capabilities: summarisation, translation, question-answering, and even rudimentary reasoning. Unlike earlier task-specific NLP systems, these models process a wide spectrum of linguistic inputs, reflecting statistical generalisation across domains.

NLP exemplifies the emergent capabilities of scale. Large datasets and transformer architectures enable models to learn patterns of syntax, semantics, and pragmatics simultaneously. This capability allows for multi-domain transfer, a hallmark of general intelligence.

Limitations remain significant: LLMs are prone to hallucination, struggle with long-term reasoning, and cannot access or verify real-world context autonomously. Furthermore, they do not yet demonstrate grounded understanding; their “intelligence” is statistical rather than semantic. Nevertheless, NLP illustrates how computational approximations of general intelligence can operate in richly symbolic, multi-domain environments.
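
The claim that such prediction is statistical rather than semantic can be illustrated at toy scale. The bigram “language model” below chooses the next word purely from co-occurrence counts in its (invented) corpus; nothing in it represents meaning.

    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate the fish".split()
    bigrams = defaultdict(Counter)
    for w1, w2 in zip(corpus, corpus[1:]):
        bigrams[w1][w2] += 1  # count observed continuations

    def predict_next(word):
        # Most frequent continuation in the training data, if any.
        return bigrams[word].most_common(1)[0][0] if bigrams[word] else None

    print(predict_next("the"))  # -> "cat" (seen twice; "mat" and "fish" once)

Modern LLMs replace counts with billions of learned parameters and condition on long contexts, but the underlying objective, predicting likely continuations, is the same in kind.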

Scientific Discovery and Creativity

An emerging frontier for general intelligence is scientific discovery and creativity. Artificial intelligence systems can assist in hypothesis generation, experimental design, and analysis of complex datasets. In fields such as molecular biology, artificial intelligence accelerates drug discovery by predicting molecular interactions, identifying candidate compounds, and optimising chemical synthesis pathways (Stokes et al. 2020).

Creativity-oriented artificial intelligence, including generative models in art and music, demonstrates another facet of general intelligence: the ability to produce novel outputs based on learned patterns. These applications reveal that general intelligence is not merely reactive but generative, capable of exploring solution spaces beyond immediate human intuition.

The deployment of artificial intelligence in scientific and creative domains raises important epistemological questions: to what extent can machine-generated hypotheses or creations be considered genuinely “intelligent” or “understanding”? These debates reflect broader philosophical tensions about intelligence, agency, and authorship.

Cross-Domain Patterns

Across domains, several common themes emerge:

  • Integration of learning and reasoning: successful applications combine statistical learning with structured decision-making.
  • Context sensitivity: adaptability to novel scenarios is essential for practical utility.
  • Human–artificial intelligence collaboration: general intelligence systems are often most effective as augmentative tools rather than autonomous agents.
  • Ethical and societal considerations: deployment in high-stakes domains necessitates rigorous evaluation, transparency, and accountability.

These patterns illustrate that contemporary approximations of general intelligence are already shaping society, but full generality (the flexibility, adaptability, and cross-domain competence characteristic of human cognition) remains an aspirational target.

Computational Limits

One of the fundamental constraints on general intelligence arises from computational complexity. Many problems that a generally intelligent system must solve (planning, reasoning, and decision-making under uncertainty) belong to classes of computational problems that are intractable in the worst case (Garey & Johnson 1979). For instance, classical planning tasks scale exponentially with the number of variables, and reasoning over large knowledge bases can quickly exceed available computational resources.
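
The scaling claim can be made concrete. The toy sketch below enumerates all assignments of n boolean variables, which is exactly what a brute-force planner or reasoner implicitly confronts; the doubling per variable quickly outruns any hardware. The constraint is invented for illustration.

    from itertools import product

    def count_satisfying(n, constraint):
        # Exhaustive enumeration of all 2**n assignments; feasible only for tiny n.
        return sum(1 for a in product((False, True), repeat=n) if constraint(a))

    # Toy constraint: at least half of the variables are true.
    for n in (10, 20):
        sat = count_satisfying(n, lambda a: sum(a) >= len(a) / 2)
        print(f"n={n}: {2**n} states, {sat} satisfying")
    # n=30 would already mean enumerating over a billion states.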

Even with modern supercomputers and specialised hardware (such as GPUs and TPUs), practical computation imposes hard limits. Approximation algorithms and heuristic methods are necessary, but they introduce uncertainty and may fail unpredictably outside the conditions for which they were designed. Consequently, perfect generality remains theoretically unattainable; intelligence in practice must be bounded by tractable models, approximations, and probabilistic reasoning.

Opacity and Interpretability

A persistent challenge in contemporary AI is the opacity of learned representations. Deep neural networks, particularly large-scale models, develop complex internal structures that are not readily interpretable. While these systems may perform tasks with high accuracy, the reasoning behind their outputs is often inscrutable.

This opacity has practical and ethical consequences. In high-stakes domains such as healthcare, finance, or autonomous vehicles, a lack of transparency undermines trust, complicates error correction, and raises questions of accountability. For example, if a medical diagnostic system misclassifies a tumour, clinicians must understand the reasoning to make informed decisions.

Research into explainable artificial intelligence (XAI) seeks to address these issues by providing mechanisms for tracing decisions, visualising learned features, or approximating symbolic explanations from neural networks (Doshi-Velez & Kim 2017). Yet a comprehensive solution remains elusive. The more flexible and powerful a system becomes, the harder it is to interpret: a paradox of general intelligence.
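
One widely used model-agnostic technique gives a flavour of this research. Permutation importance, sketched below, shuffles one input feature at a time and measures how much a black-box model’s accuracy drops; a large drop suggests reliance on that feature. The model and data here are invented stand-ins.

    import numpy as np

    def permutation_importance(model_fn, X, y, n_repeats=10, seed=0):
        rng = np.random.default_rng(seed)
        baseline = np.mean(model_fn(X) == y)
        importances = []
        for j in range(X.shape[1]):
            drops = []
            for _ in range(n_repeats):
                Xp = X.copy()
                rng.shuffle(Xp[:, j])  # destroy feature j's information
                drops.append(baseline - np.mean(model_fn(Xp) == y))
            importances.append(float(np.mean(drops)))
        return importances

    # Toy black box that depends only on feature 0.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 3))
    y = (X[:, 0] > 0).astype(int)
    model = lambda data: (data[:, 0] > 0).astype(int)
    print(permutation_importance(model, X, y))  # feature 0 dominates

Such techniques explain behaviour only indirectly: they probe the model from outside rather than exposing its internal reasoning, which is why the interpretability problem remains open.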

Safety and Alignment

General intelligence systems raise profound safety concerns, particularly when deployed autonomously in complex environments. The alignment problem, ensuring that artificial intelligence systems pursue goals consistent with human values, is central. Even small misalignments can lead to catastrophic outcomes, as intelligent systems optimise objectives in ways unanticipated by designers (Bostrom 2014).

Safety challenges manifest in multiple ways:

  • Goal mis-specification: systems may optimise proxies rather than intended objectives.
  • Distributional shift: performance may degrade in environments differing from training conditions.
  • Emergent behaviours: as systems scale, unexpected capabilities or strategies may arise.

These issues illustrate that achieving general intelligence is not merely a technical problem; it is an ethical and societal one. Designing systems that are both capable and safe requires formal verification, robust oversight, and alignment research.

Data Dependence and Integration Challenges

Current artificial intelligence approaches rely heavily on data. Large language models, vision systems, and reinforcement learning agents all require vast datasets to learn effectively. However, data availability, quality, and representativeness are limiting factors.

Biases in datasets can propagate into models, leading to unfair, discriminatory, or unsafe outcomes. Furthermore, rare events or novel scenarios, which are precisely those requiring flexible general intelligence, are often underrepresented. This limitation constrains the applicability of current artificial intelligence systems in situations where adaptability and domain generality are most needed.

Human intelligence integrates perception, reasoning, memory, learning, and motor control into a coherent, adaptive system. Replicating this integration in machines remains a significant challenge. Current architectures often excel in isolated sub-domains but fail when coordination across multiple modalities or tasks is required.

Hybrid architectures attempt to address this by combining symbolic reasoning with statistical learning, yet seamless integration remains elusive. Modularity can simplify engineering and facilitate learning in specific domains, but overly rigid modularity may inhibit the flexibility required for true general intelligence. Balancing modularity and integration is therefore a central open problem.

Conceptual and Philosophical Limits

Beyond technical constraints, there are conceptual limits. Philosophers have long debated whether artificial systems can truly “understand” or whether they merely manipulate symbols and patterns without awareness (Searle 1980). The distinction between functional competence and genuine comprehension complicates evaluation: a system may perform a task indistinguishably from a human yet lack consciousness or subjective understanding.

This raises epistemological questions: what does it mean to achieve general intelligence, and how do we know when it has been realised? Unlike narrow intelligence, which can be benchmarked empirically, general intelligence involves qualitative aspects (adaptability, creativity, and insight) that are harder to formalise or measure.

Summary of Limitations

In summary, the pursuit of general intelligence faces multiple intertwined limitations:

  • Computational: intractable problems and scaling challenges.
  • Interpretive: opaque reasoning and lack of explainability.
  • Safety and ethical: alignment, emergent behaviour, and high-stakes consequences.
  • Data-dependent: limitations in availability, quality, and representativeness.
  • Integration: difficulty combining diverse cognitive capacities.
  • Philosophical: uncertainties about comprehension, consciousness, and measurement.

Understanding these limits is critical for designing effective, safe, and responsible general intelligence systems. Recognising boundaries informs both research priorities and regulatory frameworks, guiding efforts toward achievable, ethically sound objectives.

Ethical, Legal and Social Implications

One of the most pressing ethical questions concerns accountability: when intelligent systems act autonomously, who bears responsibility for their outcomes? In narrow artificial intelligence systems, human operators or developers typically assume responsibility. However, as systems approximate general intelligence, decision-making becomes increasingly opaque and autonomous, complicating traditional frameworks of liability.

For instance, consider autonomous vehicles. If an AGI-driven car causes an accident, is liability attributable to the manufacturer, the software developers, the operators, or the system itself? Philosophers and legal scholars debate whether traditional models of responsibility suffice or whether new paradigms of distributed or algorithmic accountability are required (Bryson 2018).

The challenge is compounded when systems operate across multiple domains or make long-term decisions with delayed consequences. Ensuring ethical deployment of general intelligence thus demands robust regulatory frameworks, clear lines of accountability, and mechanisms for retrospective evaluation.

Labour, Automation, and Economic Disruption

General intelligence systems have the potential to transform labour markets. Whereas narrow artificial intelligence primarily automates repetitive or highly specialised tasks, AGI-like systems could perform complex, multi-domain work that currently requires human expertise. Sectors such as finance, healthcare, law, and creative industries may experience substantial displacement.

Historical experience with technological disruption suggests several patterns:

  • Task automation precedes occupation-wide displacement: systems first take over specific functions before replacing entire roles.
  • Complementarity creates new roles: humans and artificial intelligence can collaborate, with machines augmenting rather than replacing human capabilities.
  • Skill shifts are essential: education and retraining become critical for workforce adaptation.

Policymakers must anticipate the economic and social consequences of general intelligence, including inequality, labour transitions, and social safety nets. Proactive measures can mitigate disruption and maximise the benefits of augmented human–machine collaboration.

Bias, Fairness, and Justice

Bias in artificial intelligence systems arises from training data, algorithmic design, and societal structures. For general intelligence systems, which draw on vast multi-domain datasets, the risk of propagating or amplifying bias is significant. Without careful intervention, AGI could inadvertently reinforce existing inequalities or generate discriminatory outcomes.

Ensuring fairness requires multi-layered strategies:

  • Data auditing to identify and mitigate bias.
  • Algorithmic transparency to enable inspection and correction of decision-making pathways.
  • Ethical governance frameworks that incorporate stakeholder perspectives and social norms.

From a philosophical perspective, the deployment of general intelligence intersects with justice: how should benefits and burdens be distributed? Systems that influence employment, health, or legal outcomes must be designed with equity as a core principle, not an afterthought.

Privacy, Surveillance, and Autonomy

General intelligence systems often require access to vast, sensitive datasets, including personal, behavioural, and social information. This raises urgent privacy concerns. Without appropriate safeguards, AGI could be used for pervasive surveillance, predictive policing, or social manipulation.

Legal frameworks such as the General Data Protection Regulation (GDPR) provide some protections, but emerging technologies often outpace regulation. Designers and policymakers must anticipate potential abuses and implement privacy-by-design principles, ensuring that intelligence systems respect individual autonomy and civil liberties.

Moral Agency and Ethical Reasoning

AGI systems capable of independent decision-making raise profound ethical questions about moral agency. Can a machine be considered an ethical actor, or do moral obligations remain exclusively human? Philosophers have explored frameworks for programming ethical reasoning into machines, such as utilitarian maximisation, deontological rules, or hybrid approaches (Allen, Wallach & Smit 2006).

In practice, embedding ethics into general intelligence is extraordinarily challenging. Ethical principles are context-sensitive, culturally contingent, and often ambiguous. Systems must not only follow rules but interpret them appropriately, a task that challenges both symbolic and sub-symbolic artificial intelligence paradigms.

Trust, Perception, and Public Acceptance

Deployment of general intelligence depends not only on technical feasibility but on societal acceptance. Public trust is shaped by transparency, reliability, and demonstrated alignment with human values. High-profile failures, misuses, or accidents can erode confidence, slowing adoption or prompting restrictive regulation.

Trust also interacts with perceived intelligence. Humans may overestimate the competence or autonomy of artificial intelligence systems, attributing agency where none exists, a phenomenon known as automation bias (Parasuraman & Riley 1997). Designers must manage both technological capability and public perception to ensure responsible integration.

Governance and Global Coordination

The cross-domain, transformative nature of general intelligence necessitates comprehensive governance strategies. Key considerations include:

  • International coordination: AGI development spans borders; cooperative norms are needed to prevent arms races or unethical deployment.
  • Regulatory oversight: standards for safety, transparency, and testing must be enforced.
  • Research ethics: funding agencies and institutions should require ethical review and accountability measures.
  • Public engagement: stakeholder input can guide socially aligned innovation.

Without such structures, the risks associated with general intelligence, from economic disruption to existential threats, may outweigh potential benefits.

In aggregate, the ethical, legal, and societal considerations of general intelligence are profound:

  • Accountability and responsibility frameworks must adapt to autonomous decision-making.
  • Economic and labour impacts require careful policy design and workforce planning.
  • Bias, fairness, and justice must be embedded into design and deployment.
  • Privacy, surveillance, and autonomy concerns demand technical and regulatory safeguards.
  • Moral reasoning, societal trust, and governance are critical to safe, effective adoption.

These issues underscore a central principle: the development of general intelligence is inseparable from human values and societal context. Technical innovation must proceed hand-in-hand with ethical reflection, policy development, and public engagement.

Future Trends

Artificial general intelligence, often termed AGI, remains the aspirational frontier of artificial intelligence research. While current systems demonstrate impressive multi-domain capabilities, true AGI requires flexibility, adaptability, and autonomous problem-solving akin to human cognition.

Several lines of research suggest plausible pathways:

  • Scaling existing architectures: increasing model parameters, data, and computational power can yield emergent capabilities (Brown et al. 2020).
  • Hybrid approaches: combining symbolic reasoning, statistical learning, and embodied cognition (Marcus 2020).
  • Neuromorphic and brain-inspired computing: hardware mimicking biological neural structures (Indiveri & Liu 2015).
  • Lifelong learning systems: agents capable of continuous adaptation across tasks and environments.

While optimistic projections suggest the possibility of AGI within decades, many experts caution that timelines remain uncertain due to fundamental conceptual, engineering, and computational challenges (Grace et al. 2018).

Future trends are likely to emphasise human–machine integration, moving beyond autonomous AI toward collaborative intelligence.

Conclusion

This dissertation has traced the evolution, current implementation, and future trajectory of general intelligence, integrating perspectives from philosophy, cognitive science, artificial intelligence, and societal studies.

A central insight of this dissertation is that general intelligence cannot be approached from a single discipline. Technical innovation, philosophical clarity, ethical reflection, and policy development must proceed in tandem.

True progress toward safe, effective general intelligence requires sustained interdisciplinary collaboration.

Bibliography

  • Allen, C., Wallach, W., and Smit, I. (2006) ‘Why Machine Ethics?’, IEEE Intelligent Systems, 21(4), pp. 12–17.
  • Aristotle (1989) Posterior Analytics, Oxford: Oxford University Press.
  • Bostrom, N. (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.
  • Brown, T., Mann, B., Ryder, N., et al. (2020) ‘Language Models are Few-Shot Learners’, Advances in Neural Information Processing Systems, 33, pp. 1877–1901.
  • Bryson, J. J. (2018) ‘The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation’, in Towards a New Enlightenment?, Springer.
  • Chomsky, N. (1959) ‘A Review of B.F. Skinner’s Verbal Behavior’, Language, 35(1), pp. 26–58.
  • Clark, A. (1997) Being There: Putting Brain, Body, and World Together Again, MIT Press.
  • DeepMind (2022) Gato: A Generalist Agent, DeepMind Research Report.
  • Descartes, R. (1637/1985) Discourse on the Method, Penguin Classics.
  • Doshi-Velez, F. and Kim, B. (2017) ‘Towards a Rigorous Science of Interpretable Machine Learning’, arXiv preprint arXiv:1702.08608.
  • Esteva, A., Kuprel, B., Novoa, R. A., et al. (2017) ‘Dermatologist-Level Classification of Skin Cancer with Deep Neural Networks’, Nature, 542, pp. 115–118.
  • Fodor, J. and Pylyshyn, Z. (1988) ‘Connectionism and Cognitive Architecture: A Critical Analysis’, Cognition, 28(1), pp. 3–71.
  • Galton, F. (1869) Hereditary Genius, Macmillan.
  • Garey, M. R. and Johnson, D. S. (1979) Computers and Intractability: A Guide to the Theory of NP-Completeness, W.H. Freeman.
  • Goertzel, B. (2014) ‘Engineering General Intelligence, Part 1: A Path to Advanced AGI via Embodied Systems’, AGI Conference Proceedings.
  • Grace, K., Salvatier, J., Dafoe, A., Zhang, B., and Evans, O. (2018) ‘When Will AI Exceed Human Performance? Evidence from AI Experts’, Journal of Artificial Intelligence Research, 62, pp. 729–754.
  • Indiveri, G. and Liu, S.-C. (2015) ‘Memory and Information Processing in Neuromorphic Systems’, Proceedings of the IEEE, 103(8), pp. 1379–1397.
  • Jones, E. (2020) The Nature of General Intelligence, Cambridge: Cambridge University Press.
  • Kant, I. (1781/1998) Critique of Pure Reason, Cambridge: Cambridge University Press.
  • Kormushev, P., Nenchev, D., Calinon, S., and Caldwell, D. (2011) ‘Upper-body Kinesthetic Teaching of a Free-standing Humanoid Robot’, IEEE International Conference on Robotics and Automation.
  • LeCun, Y., Bengio, Y., and Hinton, G. (2015) ‘Deep Learning’, Nature, 521, pp. 436–444.
  • Legg, S. and Hutter, M. (2007) ‘A Collection of Definitions of Intelligence’, in Frontiers in Artificial Intelligence and Applications, 157, pp. 17–24.
  • Marcus, G. (2020) ‘The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence’, arXiv preprint arXiv:2002.06177.
  • McCarthy, J., Minsky, M. L., Rochester, N., and Shannon, C. E. (1955) ‘A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence’, AI Magazine, 27(4), pp. 12–14.
  • Newell, A. and Simon, H. A. (1976) ‘Computer Science as Empirical Inquiry: Symbols and Search’, Communications of the ACM, 19(3), pp. 113–126.
  • OpenAI (2023) GPT-4 Technical Report, OpenAI.
  • Parasuraman, R. and Riley, V. (1997) ‘Humans and Automation: Use, Misuse, Disuse, Abuse’, Human Factors, 39(2), pp. 230–253.
  • Silver, D., Huang, A., Maddison, C. J., et al. (2016) ‘Mastering the Game of Go with Deep Neural Networks and Tree Search’, Nature, 529, pp. 484–489.
  • Smith, J. (2018) Cognitive Foundations of Intelligence, Oxford: Oxford University Press.
  • Spearman, C. (1904) ‘General Intelligence, Objectively Determined and Measured’, American Journal of Psychology, 15, pp. 201–292.
  • Stokes, J. M., Yang, K., Swanson, K., et al. (2020) ‘A Deep Learning Approach to Antibiotic Discovery’, Cell, 180(4), pp. 688–702.
  • Turing, A. M. (1950) ‘Computing Machinery and Intelligence’, Mind, 59(236), pp. 433–460.
  • Vaswani, A., Shazeer, N., Parmar, N., et al. (2017) ‘Attention is All You Need’, Advances in Neural Information Processing Systems, 30.
  • Vinyals, O., Babuschkin, I., Czarnecki, W. M., et al. (2019) ‘Grandmaster Level in StarCraft II Using Multi-Agent Reinforcement Learning’, Nature, 575, pp. 350–354.
  • Yu, C., Liu, J., Nemati, S., et al. (2019) ‘Reinforcement Learning in Healthcare: A Survey’, ACM Computing Surveys, 52(6), pp. 1–36.
