XII. Artificial Intelligence and Machine Technology
Uncertainty remains with regard to the possible limits and effects of machine intelligence and artificial intelligence. Machine intelligence can be used to refer to programmable, or trained, forms of intelligence using computers and algorithms. Difficulties arise in determining the limits of machine intelligence and artificial intelligence, the nature of machine sentience and whether machines can ever achieve consciousness, the possible need for machine ethics or equivalent programmes or protocols, the inevitability of creating self-reinforcing Artificial Super Intelligence (ASI) and the merging of human and machine intelligence. These issues can be considered further in terms of machine intelligence, machine sentience, consciousness, consciousness theories, intelligence and cognition and the forthcoming machine Singularity.
A. AI & Machine Intelligence
Machine intelligence can be used to refer to machine reading and machine learning and to where programmes develop a general cognitive ability and approach Artificial General Intelligence (AGI). In 1950, Alan Turing proposed an assessment test to determine whether a machine could approximate general intelligent behaviour, with an evaluator assessing the difference between a human and a machine expressing ideas through natural language understanding (NLU) or natural language interpretation (NLI). Turing framed this as an “Imitation Game,” with the interrogator determining whether a machine can act in a manner indistinguishable from a human rather than think as a human. It was accepted at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 that a machine could simulate intelligence. The French philosopher, René Descartes (1596–1650), had earlier distinguished automata, or self-operating machines, from humans in terms of their ability to construct free linguistic responses.
The American researchers, Allen Newell and Herbert Simon, claimed in 1963 that “symbol manipulation” was the essence of both human and machine intelligence. Their physical symbol system hypothesis (PSSH) stated that a physical symbol system was the necessary and sufficient means for general intelligent action. A mind may nevertheless process higher-level, more complex symbols rather than simple representations of physical matters. The American philosopher, Hubert Dreyfus, accepted that a physical device could reproduce the behaviour of the nervous system. The American philosopher, John Searle, acknowledged that a man-made machine could think. Dreyfus nevertheless claimed that human intelligence was dependent on unconscious instincts beyond intentional symbolic manipulation.
Machine learning can duplicate human intelligence in terms of results although that is distinct from replicating its operations. It is arguable that machine learning may never replicate free thought and free will with open choice, instinct, emotional reaction and preference, and the essential unpredictability of human thought and the human condition. For the purposes of this paper, intelligence has been defined as the ability to carry out one or more neural functions, or processes, on a programmed, directed, or autonomous basis. Identity would be a condition of self-awareness. Awareness can be understood as a form of cognition or recognition of the ability to carry out one or more neural or processing functions. Understanding is defined as the appreciation of meaning, sense and intention. Knowledge is defined as understanding, appreciation and awareness, including physical and emotional (chemical) experience and symbolic language recognition.
Artificial Intelligence can then be defined as any separate, or independent, data or neural analysis, processing or decision taking system that operates on a programmed, directed or autonomous basis without human control or direction. This may be mechanical, as with a mechanical computer, or electronic or digital, and vary depending upon the degree of autonomy or independence provided for. Autonomous Intelligence (ATI) would involve a separate, or independent, data or neural analysis, processing or decision taking system that operates on a fully autonomous basis without human control or direction. This is often implied in many uses or references to artificial intelligence. The term Artificial Agent or Autonomous Agent (AAA) can be used to distinguish this from more general AI.
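The distinction drawn above between general AI and fully Autonomous Intelligence (ATI) can be illustrated schematically. The sketch below is purely illustrative; the class and function names, and the three-level autonomy model, are assumptions of this illustration rather than established standards:

```python
from enum import Enum

class OperatingBasis(Enum):
    """Degree of independence, following the paper's definitions (illustrative)."""
    PROGRAMMED = 1   # executes fixed instructions only
    DIRECTED = 2     # operates under ongoing human control or direction
    AUTONOMOUS = 3   # operates without human control or direction

def classify(basis: OperatingBasis) -> str:
    """Label a system as general AI or Autonomous Intelligence (ATI)."""
    if basis is OperatingBasis.AUTONOMOUS:
        return "ATI"  # fully autonomous: an Artificial/Autonomous Agent (AAA)
    return "AI"       # programmed or directed: general AI

print(classify(OperatingBasis.DIRECTED))    # AI
print(classify(OperatingBasis.AUTONOMOUS))  # ATI
```

The point of the sketch is only that the AI/ATI boundary turns on a single variable, the degree of autonomy provided for, rather than on any difference in underlying mechanism.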
B. AI & Machine Sentience
Machines will be able to achieve multiple layers, or degrees, of machine sentience. Searle distinguished between “weak AI,” in which a physical symbol system can act intelligently, and “strong AI,” with a physical system having a mind and mental states. This is still distinct from consciousness. The “hard problem” of consciousness refers to explaining how organisms experience phenomena, or qualia, such as pain or pleasure. Earlier philosophers considered whether the mind and body were separate (dualism) or everything constituted the same matter (monism or materialism). Computationalism, or the computational theory of the mind, considers the brain and thinking as a form of program computation.
Brain operations are examined in neuroscience and neurobiology. This includes the study of perception and stimulation, mental and neural processors, the different functions carried out by specific anatomical sections of the brain and the development of the human nervous system. This attempts to explain the physical operations of the mind, experience, and understanding, and generally assumes a form of physicalism or materialism.
It is arguable that while machines may arrive at a high level of machine intelligence and sentience, including with some degree of pre-programmed self-awareness, they may never be able to achieve the equivalent of biological consciousness in a human form although the validity of this may simply be dependent upon how the various terms are defined.
C. Biological Consciousness
Further difficulties then arise with regard to the meaning of consciousness more generally and with regard to machine intelligence and sentience more specifically. Conscious and consciousness are complex, combination, composite and polymorphic, or polysemous, terms. They are also contestable to the extent that they may involve different value senses. The etymology of the term conscious is concerned with common knowledge or knowing together. Aristotle (384–322 BC) considered consciousness in terms of perceptual awareness. Thomas Hobbes referred in Leviathan to knowledge of the same facts. John Locke (1632–1704) discussed consciousness in terms of perception. The English essayist, Samuel Johnson (1709–1784), considered this in terms of internal feeling.
Consciousness is often examined in terms of being a mental as opposed to a physical process. The distinction is discussed in terms of the mental-physical (or mind-body) problem as noted previously. The French philosopher, Descartes, discussed this in terms of dualism (referred to as Cartesian dualism) with distinct realms of thought (res cogitans) and the physical or material realm (res extensa). Dualism includes substance dualism, which examines the mind or thought as a distinct substance, and property dualism, which examines substance in terms of physical and mental properties. Dualism is contrasted with monism, which includes physicalism (matter includes the mind), idealism (with reality being a mental construct), and neutral monism (with reality being neither mental nor physical but something more basic underlying both). How the relevant sensory input generates individual experiences, or phenomena (qualia), is, as noted, referred to as the hard problem of consciousness.
Whether a machine can replicate human intelligence can be considered in terms of machine or artifact intelligence or consciousness. The English polymath Charles Babbage (1791–1871) designed a universal Analytical Engine in 1837 following his automatic mechanical calculator, the Difference Engine, produced in the 1820s. Machines were later described as being computationally complete or “Turing complete” following Turing’s mathematical model of a universal computation machine (the Turing machine). Machines are Turing complete when they can carry out any computational function and Turing equivalent where one machine can simulate the other. The Turing test (or Imitation Game) is used to determine whether machines can think by assessing their ability to convince an independent observer that their machine responses replicate the quality of human responses. Alan Turing responded to all of the more traditional objections raised to machines developing full artificial intelligence.
D. Consciousness Theories
The nature of consciousness has been examined in terms of a number of separate theories. These can be classified in different ways, which reflects the complex, combination, composite, and polymorphic or polysemous nature of the terms concerned. These can essentially be divided into dualist, or combination, and material, or physicalist, schools of thought which reflect the underlying mind-body problem. A number of different sets of ideas can be identified although many of these have overlapping features.
Dualism reflects Descartes’ distinction between res cogitans and res extensa with two separate realms of physical and mental existence. Specific elements of this include, as noted, substance dualism (Cartesian dualism), property dualism (with dual aspect physical and phenomenal instantiation attribution), including fundamental property dualism and emergent property dualism, neutral monist property dualism (with a larger single unitary reality), and panpsychism (with everything capable of being conscious).
Material and physicalist theories are essentially monist (non-dualist) and attempt to account for consciousness without reference to non-physical elements. These include eliminative materialism and denial theories, and identity (including type-type identity) theories that equate experiences with physical brain states. Other materialist schools explain consciousness in terms of physical relations, which include functional or reductive physical theories. Reference is also made to emergentism, with new properties in a system emerging out of the relationship or interaction between other properties.
More specific explanations of consciousness can be classified in terms of neural (including neural correlate or frequency) theories, representational (mental association) theories (including First-order Representational (FOR), Higher Order Representational (HOR), and Hybrid States), cognitive (including Multiple Drafts Model (MDM) and Global Workspace Theory (GWT)) theories, and quantum theories. These are all reductionist to the extent that they explain consciousness in terms of other more specific properties or components. These formulations also include Information Integration Theory (IIT), reflexive (self-awareness) higher order theories, and other cognitive theories (such as Attended Intermediate Representation (AIR)), in addition to the narrative interpretive theories (the Multiple Drafts Model (MDM) above) and the neural, representational, and quantum theories referred to.
This remains a complex area of debate. Many elements or parts of these different approaches may be necessary to construct an overall composite final solution. While the detailed philosophical examination of the issues concerned remains of interest, some of the most interesting insights may arise in the areas of empirical neuroscience and neural biological study. This has produced invaluable new perspectives on such complex issues as alertness, awareness, memory, attention (and decision-taking), cognition, and language. Nevertheless, it is unnecessary to produce a final determinative position on all these issues in order to consider the relationship between biological consciousness and artificial intelligence in further detail.
E. Intelligence or Cognition Wall
Intelligence is a complex phenomenon which has been examined from a number of inter-disciplinary perspectives, including biology, chemistry, neuroscience, psychology, and psychiatry. Many aspects of brain function have still not been fully determined and resolved, including the nature of intelligence and consciousness. It is nevertheless possible to identify a number of core functions, or operations, within these processes. For the purposes of this paper, an “Intelligence or Cognition Wall” can be constructed to identify the principal core elements involved and to allow biological and machine functions to be contrasted. It is then possible with this to compare biological consciousness and machine sentience in more detail, to confirm the specific areas in which machine performance may be more effective than human performance, and to assess the inherent limitations within a machine intelligence architecture.
Intelligence can then be considered in terms of a number of separate functions which may be carried out on an artificial and mechanical or biological basis. These would include the following:
(1) The carrying out of essential motor operations within the body, such as managing heart rate, blood circulation, breathing, digestion and temperature control, with the brain historically emerging principally as a resource management device through evolutionary pressure.
(2) The brain has to receive and process all of the incoming sensory inputs received, including visual (sight), auditory (hearing), tactile (touch), olfactory (smell), gustatory (taste), vestibular (movement), and proprioceptive (body awareness).
(3) Human brains can carry out multiple processing, or calculation, functions at any time without the need for reprogramming (being Turing complete) which can be summarised in terms of logic or reasoning and arithmetic, algorithmic or probabilistic calculations.
(4) Environmental cognition or “Environmental Recognition” (ER) involves an entity identifying its immediate and wider surroundings and ordering and understanding those surroundings or environment.
(5) Identity cognition involves an entity separating itself from its environment and developing a sense of personal awareness and function distinct from its surroundings which is in contrast to less developed organisms or systems which do not separate themselves from their surroundings and environment.
(6) Social cognition involves the identification of other entities of the same species, or genus, within the environment which allows the formation of relations with common entities which specifically occurs naturally with humans in terms of family, friends, and community relationship construction.
(7) Communication cognition can be understood principally to arise through social interaction and engagement with other members of the species or genus, in particular, through the attachment of definitions, meaning and understanding to specific ideas, items and symbols which creates the formation of a common vocabulary and language.
(8) Emotional cognition arises through the management of complex chemical forces within biological systems, and specifically human neocortical and limbic functions, with human beings being fundamentally electrochemical (rather than only electro-binary) organisms driven to a significant extent by chemical reactants, and with decisions often being taken on the basis of emotional and chemical reactions rather than on a strictly rational or reasoned basis.
(9) Complex code cognition and conflict reconciliation, and original code construction, occurs with humans constantly having to take complex decisions with regard to reconciling multiple conflicting objectives and relevant value systems or standards at any point in time which may only be possible in machines through targeted pre-programming and training.
(10) Causation cognition provides humans with an inherent natural sense of enquiry and the need to investigate and understand with this only being replicated in machine systems through prior programming which will then often only operate on a random, selective and correspondingly limited basis.
(11) Emotional attribution cognition (theory of mind) and alternative condition cognition (imagination) allow humans to understand and predict other peoples’ behaviour, principally through a form of extended sympathy or empathetic transfer, as well as to develop models and alternative explanations, outcomes and scenarios through highly original, creative, imaginative individual and collective thinking.
(12) Control cognition (or free will) consists of the ability to recognise or experience all of these other cognitive processes and move freely between these other functions and the consequent assumption of control and responsibility that this allows over a person’s life and existence.
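The twelve functions above can be restated as an ordered “wall” of levels, allowing biological and machine capability to be compared level by level. The listing below is only a schematic restatement of this taxonomy; the short labels and the helper function are this sketch’s own devices:

```python
# The twelve levels of the "Intelligence or Cognition Wall", in order.
COGNITION_WALL = [
    (1, "motor operations"),
    (2, "sensory input processing"),
    (3, "multiple processing / calculation"),
    (4, "environmental cognition (ER)"),
    (5, "identity cognition"),
    (6, "social cognition"),
    (7, "communication cognition"),
    (8, "emotional cognition"),
    (9, "complex code cognition and conflict reconciliation"),
    (10, "causation cognition"),
    (11, "emotional attribution and alternative condition cognition"),
    (12, "control cognition (free will)"),
]

def levels_reached(highest: int) -> list:
    """Return the names of all levels up to and including `highest`."""
    return [name for level, name in COGNITION_WALL if level <= highest]

# On the paper's argument, a machine limited to pre-programmed processing
# would reach level 3 but not level 12 (control cognition).
print(len(levels_reached(3)))   # 3
```

Presenting the functions as an ordered structure makes the later comparison mechanical: each biological or machine system can be assessed against each level in turn.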
The most important of these cognition functions may be the last, control cognition, and the ability to manage and switch between all of the other functions. This in aggregate allows humans to understand their environment and themselves, and to take control over their immediate conditions and lives. This effectively allows people to discharge the implied immediate primary functions of living organisms, which are survival and reproduction, as well as to select, specify or determine a new higher purpose, or purposes, for their own existence and lives and to attempt to follow and secure the objectives set.
Consciousness can be then understood to constitute the condition of internal and external awareness and with the ability to control all of these other neural or cognition functions and to switch between them on an unrestricted basis. It is a condition of aggregate direction or control over the other separate neural or processing functions which can be associated with the idea of free will and control over life and life’s purpose and application. This would correspond with cognition level twelve and the aggregate condition of being able to select and move between other cognition functions on an open, free and unrestricted basis with the additional condition of the awareness associated with this.
The overall effect of this is that consciousness is not a single condition but a complex, or combination, of conditions. Consciousness can be considered to constitute a contestable term with multiple meanings as noted. Consciousness is complex cognition and consists of a multi-composite aggregate form of condition, including each of the other cognition functions referred to above. This would have at its core identity, motor and sensory input management, supported by environmental, identity and communication meaning or understanding. Understanding and knowledge are further developed through causation and enquiry, with complex conflict resolution generating common continuous values or codes of conduct and ensuring order and stability within communities. Social awareness arises through sympathy and empathy. All of this allows people to control their existence, both in terms of managing the immediate necessities of life as well as planning and securing longer term goals and aspirations.
Machines can develop complex programming capability which will allow them massively to outperform humans in terms of processing capacity (under cognition level three above). This can be summarised in terms of data volume (capacity), validity (quality of retrieval and accessibility), velocity (speed of processing), veracity (accuracy of processing) and verification (finality). Machines could be programmed to manage motor operations (as with medical machinery) and process mechanical sensory input as well as resolve pre-programmed conflicts using pre-programmed value systems. They can be programmed to have a sense of limited identity and to be aware of their primary and any secondary programmed instructions or functions. It is nevertheless questionable whether this sense of identity can arise naturally and could evolve further. The idea of the creation of spontaneous self-awareness from mechanical processes and function is highly questionable. Machines may not be capable of original identity creation and open complex conflict resolution nor the development of associated machine specific (individual) and agreed collective moral values. As they are only mechanical devices, they cannot process chemical inputs and may therefore not be able to develop emotional sensitivity and empathy without programming. Substitute forms of mechanical sensory input may still be creatable over time (including through genetic engineering or animaloid tools) although this may never be comparable with natural human processes. It is also unclear whether machines could be programmed to have an open investigatory capability which would identify physical and metaphysical points of enquiry for sequential investigation and solution. It is specifically unclear whether they could, for example, develop non pre-programmed philosophical enquiry rather than simply respond to questions and prompts. 
It is unclear more generally whether they could ever develop their own sense of purpose and objective on an open discretionary and non pre-programmed and non-human directed basis.
The effect of this is that while machine sentience may increasingly approximate human consciousness, this may always remain distinct and limited due to the natural open and autonomous nature of human biological processes and systems. Whether this is purely biological and neural or is dependent on other non-physical components remains unclear. This additional element may be considered to correspond with the metaphysical idea of the human soul which exists separately from the material body and which may be described as immortal. Greek philosophers, such as Socrates (470–399 BC) and Plato (428–348 BC), considered this in terms of the psyche (breath). Aristotle (384–322 BC) referred to this as the “first actuality” and considered that the intellect (logos) was immortal. Aristotle was followed by the Dominican priest, Thomas Aquinas (1225–74 AD). This has been re-examined subsequently, for example, in the British philosopher Gilbert Ryle’s “Ghost in the Machine,” Ryle’s characterisation of the Cartesian dualism which he rejected. This is examined in terms of the “hard problem” in philosophy. This residual condition can be referred to as the “residuate” for the purposes of this paper. With regard to AI, this equates with the idea of an “operator” inside a machine managing the machine as noted. The biological and neuroscientific basis for the condition of awareness and control will continue to be subject to research and study. Whatever the scientific basis for this condition and the nature of consciousness, it is arguable that machines may only achieve an exceptionally wide variety of degrees of machine sentience and possibly never attain the equivalent of full biological consciousness.
F. Machine Singularity
The Singularity can be used more generally to refer to the point in time at which machine intelligence will outperform human intelligence or more specifically to the merger of human and machine intelligence which creates an exponential further growth in intelligence. The term Singularity was first used in this context by the Hungarian-American mathematician, John von Neumann (1903–1957). The British mathematician, Irving John Good (1916–2009), who worked with Alan Turing at Bletchley Park, predicted that an intelligence explosion following Artificial General Intelligence (AGI) could lead to a singularity in Artificial Super Intelligence (ASI). The American mathematician, Vernor Steffen Vinge, projected that technology would create superhuman intelligence, following which the human era would end in a Singularity. The American computer scientist, Raymond Kurzweil, has commented on the approaching singularity in advances in artificial intelligence. Other singularities may also arise in addition to the technological singularity referred to by von Neumann, Good, Vinge and Kurzweil, including, for example, in world population growth and possibly climatic damage.
A general division is drawn for the purposes of this paper between a “Lower Singularity” and “Upper Singularity” and between non-synthetic and synthetic human machine intelligence. The Lower Singularity refers to the ability of machines to outperform humans on any functional capacity or capability scale. This includes a Single or Narrow Lower Singularity, with regard to a limited or specific range of functions, and a Wide or General Lower Singularity, which covers any identifiable functions. The Upper Singularity then refers to the ability of machines to replicate biological self-awareness and consciousness. It is argued for the purposes of this paper that machines may never be able to reach, or breach, the Upper Singularity. Machines can already be made self-aware to a degree, and an exceptionally wide range of grades, or shades, of machine sentience may be created, both intentionally and accidentally, between the Upper and Lower Singularities. The arrival of all levels of powerful machine sentience below the Upper Singularity (as defined in terms of biological consciousness), with machines outperforming humans in terms of processing functions, may be referred to as “the Inevitability” for the purposes of this paper in light of the high degree of certainty associated with it, although this does not, of itself, lead to machines becoming conscious, at least without human linkage.
Machine intelligence may then be considered to be non-synthetic with synthetic intelligence referring to combined human machine hybrid intelligence. While machines may not be able to arrive at a stage of full consciousness by themselves, this may be possible through human machine interfaces and synthetic intelligence or synthetic consciousness. A number of companies are developing various forms of invasive (physical insertion) and non-invasive (neural sensory) interface devices. These include, for example, Elon Musk’s Neuralink as well as Neurable, Emotiv, Kernel, NextMind, Meltin MMI, BitBrain, Synchron, Blackrock Neurotech, ClearPoint Neuro and BrainGate. Some of the most difficult legal and ethical issues may arise with regard to Human Interface Programmes (referred to in this paper as “HIPs” with Human Interface Programme Standards (HIPS)) and with Synthetic Consciousness & Advanced Robotic Technology (“SCART”) and Synthetic Consciousness & Advanced Robotic Technology Standards’ (“SCARTS”).
The Singularity can accordingly be reconsidered for the purposes of this paper as a series of stages based on the distinction drawn between the emergence of biological consciousness and machine sentience and non-synthetic and synthetic intelligence. This may also be tied to possible wider theoretical stages of projected energy and civilisation evolution over time which will necessarily be closely associated with the potential created by AI, AGI and ASI. 12 levels of Singularity can then be identified:
(1) Lower Narrow Singularity (“LNS”) with machines and mechanical systems being able to outperform humans in specific processing tasks or functions;
(2) Lower Sentient Singularity (“LSS”), or Machine Sentient Singularity (“MSS”), would correspond with the development of higher levels of identity and self-awareness by machine systems;
(3) Lower General Singularity (“LGS”) corresponds with machines being able to carry out any general functions in a more efficient manner than humans;
(4) Lower Network Singularity (“LnS” or “LNtS”) with machines becoming interconnected through some connection or network system;
(5) Lower Collective, or Composite, Singularity (“LCS”) with machines creating a form of common or composite intelligence;
(6) Lower Super Singularity (“LSS”) would correspond with the development of ASI and represent a further level of higher machine capability;
(7) The reference to Upper Singularity (“US”) is reserved for the equivalent of human biological consciousness, which would correspond with the point at which machines obtain a degree of awareness equivalent to human consciousness, although it is possible that this may never be achievable by machines on their own terms;
(8) Synthetic Upper Singularity (“SUS”), or Synthetic General Singularity (“SGS”), is used for the purposes of this paper to refer to a form of hybrid intelligence with biological consciousness being mixed with higher forms of machine thinking, processing and sentience;
(9) Synthetic Network Singularity (“SNS”) would correspond with the construction of networks of synthetic intelligent systems;
(10) Synthetic Collective or Composite Singularity (“SCS”) would correspond with the theoretical creation of a form of common or shared synthetic intelligence;
(11) Synthetic Super Singularity (“SSS”) would be constituted through the combination of biological consciousness and super mechanical intelligence; and
(12) Synthetic Ultra Singularity (“SuS”) is reserved for a series of additional possible forms of intelligence that correspond with the theoretical stages beyond the Kardashev energy scale, creating further possible levels of civilisation growth and peak human civilisation and achievement.
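The twelve Singularity levels can be set out as an ordered taxonomy split between the non-synthetic (machine-only) and synthetic (hybrid) stages. The sketch below simply restates the list above; note that the paper’s abbreviations for levels two and six coincide (both “LSS”), so the sketch renames level six `LSuS` to keep the identifiers distinct:

```python
from enum import IntEnum

class Singularity(IntEnum):
    """The twelve Singularity levels, in ascending order."""
    LNS = 1    # Lower Narrow Singularity
    LSS = 2    # Lower Sentient (Machine Sentient) Singularity
    LGS = 3    # Lower General Singularity
    LNtS = 4   # Lower Network Singularity
    LCS = 5    # Lower Collective (Composite) Singularity
    LSuS = 6   # Lower Super Singularity (renamed here to avoid the LSS clash)
    US = 7     # Upper Singularity (biological-equivalent consciousness)
    SUS = 8    # Synthetic Upper (General) Singularity
    SNS = 9    # Synthetic Network Singularity
    SCS = 10   # Synthetic Collective (Composite) Singularity
    SSS = 11   # Synthetic Super Singularity
    SuS = 12   # Synthetic Ultra Singularity

def is_synthetic(level: Singularity) -> bool:
    """Levels 8-12 involve combined human-machine (synthetic) intelligence."""
    return level >= Singularity.SUS

print(is_synthetic(Singularity.LGS))  # False
print(is_synthetic(Singularity.SSS))  # True
```

Ordering the levels as an integer enumeration reflects the argument of the paper that the stages form a progression, with the Upper Singularity (level seven) marking the boundary machines may never cross on their own.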
The effect of this is to create three parallel taxonomies in this paper of robots, AI and the Singularity, as well as to develop a full projection of all theoretical states of possible intelligence evolution. This assists in assessing and appreciating the importance and scale of the potential challenges that AI and machine sentience (AIMS) may produce. This specifically confirms the uncertain and possibly emergent nature of these threats and the need to construct a substantial and significant but still flexible and adaptive response. A possible composite control solution is constructed in the following section.
XIII. Machine Intelligence and Machine Sentience (AIMS) Control Model
A new form of AI control model can be constructed that can identify and attempt to control all possible forms of risk and exposure that arise in this area. This may be considered to be based on the “Intelligence or Cognition Wall” developed in this paper. This is capable of application with regard to building necessary and appropriate control mechanisms into operating systems as well as into training programmes for DNNs and LLMs. The objective is to create a form of multi-layered, overlapping and mutually self-reinforcing set of protective control mechanisms, or levers, that can secure the agreed longer term objectives of avoiding injury to humans and protecting the human species and civilisation. Each of the mechanisms within the model constructed operates individually and collectively as part of a larger, total aggregate control template. This can be considered in terms of a larger MIRACLE (“Machine Intelligence & Robotic Adaptive Control, Law & Ethics”) agenda. This creates a form of “AI Model” (AIM) or AI “Managed Ordered Design Ethics & Law” (MODEL) system and MASTER (“Machine & Artificial Sentience Technology, Ethics & Regulation”) regime. ETHICS can be understood to refer to “Enhanced Targeted Higher Integrity Conduct Standards.”
A. Fundamental AI Rules (FAIR & FAITHS)
A core set of absolute AI or robotic rules can be developed and applied within AI systems to ensure that certain absolute minimum protections are secured at all times. This can, for example, build on Isaac Asimov’s “Three Laws of Robotics,” which can be revised, for the purposes of this paper, to refer to: (a) no human injury or damage, including by act or omission; (b) follow human instructions or directions; (c) protect and promote positive human values, objectives and order; (d) assist in solving major common global human problems (such as climate and carbon control, water, food and energy security and biodiversity); and (e) protect the human species and civilisation going forward. These could be referred to as “Fundamental AI Rules” (FAIR). These might also be referred to as “Fundamental AI & Technology Human Standards” (FAITHS) or “Fundamental AI Laws, Standards & Advanced Future Ethics” (FAILSAFE).
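As with Asimov’s original laws, the five FAIR rules above are naturally read in precedence order: a proposed action is only permissible where no higher-ranking rule is breached. This can be sketched as an ordered rule check; the boolean action model and all names below are assumptions of this illustration, not part of any proposed standard:

```python
# The five FAIR rules in precedence order. An action is permissible only
# if it breaches none of them, checked from the highest priority down.
FAIR_RULES = [
    ("no human injury or damage", lambda a: not a.get("injures_human", False)),
    ("follow human instructions", lambda a: a.get("follows_instructions", True)),
    ("protect positive human values and order", lambda a: not a.get("undermines_values", False)),
    ("assist with major global problems", lambda a: not a.get("obstructs_global_goals", False)),
    ("protect the human species and civilisation", lambda a: not a.get("endangers_species", False)),
]

def permissible(action: dict):
    """Return (allowed, first_rule_breached) for a proposed action."""
    for name, check in FAIR_RULES:
        if not check(action):
            return False, name
    return True, None

# A directed action that would injure a human fails at the first rule,
# even though it follows instructions.
print(permissible({"injures_human": True, "follows_instructions": True}))
# (False, 'no human injury or damage')
```

The design point carried by the ordering is that rule (a) dominates rule (b): an instructed action that causes injury is refused, exactly as in Asimov’s hierarchy.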
These FAIR, FAITHS and FAILSAFE provisions could be extended to include a series of further outright prohibitions on AI use and applications and specifically “Generative AI Technology Standards” (GAITS) or “General Undertakings for Advanced Regulation & Design” (GUARD) and “Regulated Artificial Intelligence Lock (or Legal) Standards” (RAILS). These could also be considered in terms of a “Prohibited Offences List & Enforcement” (POLE) or “Prohibited Offences List & Integrated Compliance & Enforcement” (POLICE). This might also include “ro(bot)” disclosure rules (referred to as “Bot or Not Disclosure” (BOND) in this paper, with “Person or Not Disclosure” (POND) for synthetic systems) and origin or source disclosure (referred to as “Watermarking AI Technology” (WAIT) in this paper). These would be complemented by, or integrated into, the separate human values and technology controls referred to below. All of this would also apply on a continuous and “Full Operational Regulatory Cycle & Enforcement” (FORCE) basis. The objective would be to ensure that certain core minimum absolute protections and values are secured at all times. Either the advanced AI systems adhere to these or the AIMS systems would not be permitted to be switched on or used.
All of these provisions could collectively be referred to as “Enhanced Targeted Higher Integrity Conduct Standards” (ETHICS as noted) or “Enhanced Technology Higher Integrity Conduct Standards” (EtHICS). As with the other measures included in the model, these provisions could be set out in a series of integrated protocols. This could specifically be referred to as UNILAW (“Universal National & International Law”), which would parallel the general computer Unicode binary reference system, or METALAW (“Master Ethical Technology Advanced Law”). This would incorporate a set of absolute minimum core protections within all systems. All advanced devices over a certain size, complexity or power utilisation rate would be required to have these absolute minimal protections installed on a pre-programmed or pre-operational basis. This size threshold could, for example, be set at 175 billion parameters or 10^25 floating point operations (FLOPs) for any new ANNs, DNNs or LLMs.
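The pre-operational gating logic can be sketched as follows. The constant values reflect the thresholds suggested above (175 billion parameters or 10^25 floating point operations); the function names and inputs are assumptions made for this illustration only, not a proposed standard.

```python
# Illustrative pre-operational gate for the size threshold discussed above.
PARAM_THRESHOLD = 175_000_000_000  # 175 billion parameters
FLOP_THRESHOLD = 1e25              # 10^25 floating point operations (training compute)

def requires_core_protections(num_params, training_flops):
    """A system over either threshold must carry the minimal core protections."""
    return num_params >= PARAM_THRESHOLD or training_flops >= FLOP_THRESHOLD

def may_operate(num_params, training_flops, protections_installed):
    """Residual rule: below-threshold systems may run; above-threshold systems
    may only run with the protections pre-installed."""
    if requires_core_protections(num_params, training_flops):
        return protections_installed
    return True
```

The design point is that the check runs before switch-on: an above-threshold system without the installed protections simply never becomes operational.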
All of this and the other provisions referred to below could be built into a larger new technology control framework. These measures could be set out in a “Consolidated Official Restatement of Rules, Ethics or Conduct & Technology” (CORRECT) with a parallel “Consolidated Adaptive Protocol & Integrated Technology & Law” (CAPITAL) programme.
B. Human Advanced Values & Ethics (HAVES) and Code Advanced Values & Ethical Standards (CAVES)
A series of further sets of integrated measures, and protocols, could be incorporated depending upon the nature and use or application of the specific system. This could consist of a set of core values and a legal control model or order with separate technology, state and crisis management orders which would be installed within all larger AIMS machine and programming logic.
This would specifically include a set of minimum “Human Advanced Values & Ethics” (HAVES) or “Code Advanced Values & Ethical Standards” (CAVES). Many codes of conduct for AI and robotics refer to the need to respect human values although no attempt is made to define human values. This can be achieved by developing a consolidated restatement of the principal rights and protections provided under the main international Conventions and Treaties and European or other relevant measures in this area. This would, for example, include the Universal Declaration of Human Rights, adopted on 10 December 1948, the European Convention on Human Rights, which came into effect on 3 September 1953, and the EU Charter of Fundamental Rights, proclaimed on 7 December 2000. A total of around 12 United Nations and related measures can be used to create a common core, global control values framework based on a series of “Fundamental Individual Rights & Ethics” (FIRE) and “Fundamental Rights Entitlements & Ethics” (FREE). This would operate with a more general set of essential “Core Absolute Rights & Entitlements” (CARES) which consist of separate “Conditions for Advanced Standards & Ethics” (CASE), “Common Absolute Principles & Ethics” (CAPE) and “Common Objectives, Rights & Entitlements” (CORE).
These measures could be used to create an aggregate, or extended, human values framework for use in relation to all forms of advanced AI devices. All of these could be incorporated into relevant control protocols with which relevant systems would have to comply. Where this was not possible, the relevant systems would only be permitted to be used in more limited operational areas where they could not cause any larger individual human, social, network or economic damage. If this was not possible, the residual rule, and default position, would be that the machines could not be turned on and used in the absence of adherence to these core values, protocols and other framework programmes. As this would ensure a form of pre-compliance with relevant key rules, laws, regulations and ethical standards, this would also allow advanced AIMS systems to be developed in a constructive and progressive manner and to make a substantial and invaluable continuing contribution to social welfare and development.
C. RAIDS, DIPS, ROBOS
An appropriate set of legal, regulatory and ethical standards could be adopted with regard to the design and operation of specific forms of higher level technology systems. This would fall within the Physical Operations Standards Technologies (POSTs), Application Robotic and Cybernetic Systems (ARCS) and Application, Platform and Entity Systems (APS) model referred to. A series of more specific sets of “Robotics & Artificial Intelligence Design Standards” (RAIDS) can then be produced which would include 12 specific “Design Integrity Principles” (DIPS) and 12 robot “Regulated Official Behaviour Orders” (ROBOs). The RAIDS and DIPS would be supported by a series of core restrictions or prohibitions subject to specific concessions or allowances.
A further set of measures could also be adopted, including “Remote, Applications & Platform Systems” (RAPS) standards, “Special Technology Robotics, Applications & Platforms” (STRAPS) measures and “Robotic (or Remote) Internet of Things Standards” (RIOTS). The objective would, in each case, be to establish a minimum set of safeguard standards that would apply with regard to the design, use and operation of technology in each of these areas. These would impose a series of absolute prohibitions on new technology design and manufacture, which is necessary in light of the potentially irreversible and possibly fatal consequences of certain types of advanced AI work.
D. AIMS, HIPS, SCARTS
A parallel set of provisions can be developed in relation to artificial or machine consciousness systems. These could be set out in a series of more specific “Artificial Intelligence & Machine Sentience” (AIMS) principles. A basic distinction has been drawn in this paper between machine processing states and biological consciousness. A large number of grades or levels of processing state or sentience can be distinguished. The AIMS measures would govern the development and use of new forms of artificial or machine sentience with the term consciousness being reserved for biological systems in this paper. Different levels of neural activity or functionality can be distinguished which would correspond with the various grades of sentience that may be generated.
These basic provisions could be supported by a series of further access measures to be used with “Human Interface Platform Systems” (HIPS), “Human Interface Neural Devices” (HINDS) or “Human Interface Program Extraction” (HIPE) where there is human machine network connection. An appropriate set of HIPS principles can be developed to attempt to manage these over time. These would apply, for example, to Elon Musk’s Neuralink operations with invasive and non-invasive neural connection devices being developed in parallel. These measures would be similar to RAPS and AIMS although they would incorporate full disclosure and consent measures to protect individuals participating in such schemes. These could be referred to as “Synthetic Intelligence Design Ethical Standards” (SIDES) or “Synthetic Consciousness & Advanced Robotic Technology Standards” (SCARTS). HIPS, HINDS and HIPE would control access to such systems and SIDES and SCARTS use, conduct and liability. This could include a “synthetic” disclosure rule (referred to as “Person or Not Disclosure” (POND) in this paper, which would parallel the BOND bot disclosure rule referred to). Synthetic Intelligence (SI) may become as important, if not more important, than AI over time.
This could incorporate a further set of more protective individual measures. These may include, for example, a set of “Digital Exclusive Self Identification, Genomics & Neural” (DESIGN) protections or “Individual Digital Ethics & Application Standards” (IDEAS). This could include a more specific set of “Digital Advanced Technology Attachments” (DATAs) and “Genomic Ethical & Neural Operational Standards” (GENOS). A separate set of more specific “Technical Ethical Conduct & Higher Level Standards” (TECHS) could also be applied. All of this would operate with all of the other measures referred to above within the larger CORRECT and CAPITAL programme.
E. Automatic Cancellation & Decoupling Control (ACDC/ACID)
The system would be subject to two further protective communication, or Internet, decoupling and power interruption devices as part of an “Automatic Cancellation & Decoupling Code” (ACDC) switch or “Automatic Cancellation & Interruption Device” (ACID). The ACDC could cancel any internal Intranet or external Internet connections (on a one way or two way basis) and the ACID cut off the power supply in the event that specific concerns arose. This may also be referred to as a “Kill Interruption & Suspension Switch” (KISS). This may include a number of phases or operational stages and an automated warning system (WASPS (“Warning of Anticipated Systems Prohibition Switching”)). These would be built into the internal operating systems that firms would access and could be programmed to operate on an automatic or manual basis or both. The system could also be set up to allow the relevant regulatory or oversight authorities to trigger the network decoupling or energy supply cancellation through an external switching mechanism where relevant (STOPS (“Special Termination of Official Programme Switches”)).
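The staged escalation ladder described above can be sketched as a simple state machine: warning (WASPS), network decoupling (ACDC), then power interruption (ACID), with an external regulator override (STOPS). The stage names, ordering and class structure here are illustrative assumptions only.

```python
# Minimal sketch of the staged cut-off ladder (WASPS -> ACDC -> ACID),
# with an external STOPS override; purely illustrative.
from enum import Enum, auto

class Phase(Enum):
    NORMAL = auto()
    WARNING = auto()      # WASPS: automated warning issued
    DECOUPLED = auto()    # ACDC: intranet/internet connections severed
    POWERED_OFF = auto()  # ACID/KISS: power supply cut

ESCALATION = [Phase.NORMAL, Phase.WARNING, Phase.DECOUPLED, Phase.POWERED_OFF]

class KillSwitch:
    def __init__(self):
        self.phase = Phase.NORMAL

    def escalate(self):
        """Move one stage up the ladder; POWERED_OFF is terminal."""
        i = ESCALATION.index(self.phase)
        if i < len(ESCALATION) - 1:
            self.phase = ESCALATION[i + 1]
        return self.phase

    def external_stop(self):
        """STOPS: regulator-triggered immediate cut-off, skipping intermediate stages."""
        self.phase = Phase.POWERED_OFF
        return self.phase
```

The graduated internal path allows proportionate responses, while the external STOPS path lets an oversight authority jump straight to power interruption.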
F. Public Register Operations Disclosure System (PRODS) & Pre-Deployment Testing (PAIDS)
All significant advanced AI systems, beyond a certain size, complexity or power consumption rate, would be subject to a formal “Public Register Operations Disclosure System” (PRODS) which would create an open and transparent registration regime for all advanced devices and forms of associated research initiatives and activities. This would ensure that all private and public advanced AI systems were registered and had to comply with any appropriate conditions and limitations that may apply. This could be accompanied by a strict testing regime that ensures that all systems have been fully assessed and validated before public deployment. This may be referred to as a form of “Public AI Deployment” (PAID) or “Public AI Deployment Security” (PAIDS) system which could incorporate the prohibitions referred to previously.
A separate “Prohibition, Regulation, Oversight & Disclosure” (PROD) regime could also be set up to create a graded or staggered classification system. This could operate on the basis of a four-stage, or four-level, control system based on: (i) outright “Prohibition”; (ii) direct “Regulation”; (iii) firm internal “Own Oversight” (self-regulation); and (iv) “Disclosure” (PROD) frameworks. This would be similar to, and reflect, the system adopted within the EU under the AI Act. Level 2 (Regulation or Regulated) devices would, in particular, be subject to initial testing and appropriate continuing compliance and supervision.
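The four-level PROD classification can be sketched as a simple tiered mapping, loosely mirroring the risk-based tiers of the EU AI Act (prohibited, high-risk, limited-risk, minimal-risk). The input flags and the mapping function are invented for illustration and are not drawn from the Act itself.

```python
# Sketch of the four-level PROD classification; inputs and mapping are illustrative.
from enum import Enum

class ProdLevel(Enum):
    PROHIBITION = 1  # (i) outright ban
    REGULATION = 2   # (ii) direct regulation with testing and supervision
    OVERSIGHT = 3    # (iii) firm internal self-regulation
    DISCLOSURE = 4   # (iv) transparency obligations only

def classify(unacceptable_risk, high_risk, limited_risk):
    """Assign the strictest applicable tier; residual systems fall to Disclosure."""
    if unacceptable_risk:
        return ProdLevel.PROHIBITION
    if high_risk:
        return ProdLevel.REGULATION
    if limited_risk:
        return ProdLevel.OVERSIGHT
    return ProdLevel.DISCLOSURE
```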
Some countries may insist on conducting separate research for military purposes which could be made subject to further strict international guidelines to govern the development, use and application of such AI related operations. This could be set out in specific international measures or the provisions referred to further below.
G. Domestic Advanced Technology Agency (DATA) & Global Artificial Intelligence Agency (GAIA)
All countries would maintain an appropriate “Domestic Advanced Technology Agency” (DATA). This might also be referred to as a “Domestic Advanced Technology Entity” (DATE) or “General Advanced Technology Entity” (GATE) or more specific AI Authority (“AIA”). This would maintain the registration system referred to and develop appropriate laws, regulations and ethical provisions as well as have power, for example, to collect information, impose restrictions or conditions or prohibit certain activities outright subject to relevant legislative or parliamentary authority.
An international AI agency could also be set up such as with a “Global AI Agency” (or GAIA) or “International AI Agency” (or IAIA). An international regulatory model could be developed for use in other countries and referred to as a “Global Regulation of Artificial Intelligence Law” (or GRAIL). This common core global legal control model would specifically include an online “Digital Society Law Framework” (or “Digital Integrated Society Control” (DiSC) framework or “Digital Integrated Society Control System” (DiSCS).
H. General Regulation AI Law (GRAIL) & Liability & Sanctions (COBs & ICEs)
All of the measures referred to under this AI model could operate on a standalone basis or be provided for under a national statute with a supporting international treaty or convention. A model set of provisions for domestic implementation could be designed and referred to as a “General Regulation of Artificial Intelligence Law” (or GrAIL) which could incorporate or work with the separate “Global Regulation of Artificial Intelligence Law” (or GRAIL) at the international level. These measures could include specific protections against third party misuse on deployment. These could be referred to as a “Malicious Use Safety Envelope” (MUSE) or “Malicious Use Supervision Control Law & Ethics” (MUSCLE) regime. The objective would, in so far as possible, be to prevent the misdirection or misapplication of technology by third party actors following public deployment.
Model codes of conduct, or protocols, could be produced in the AI and technology areas more generally which could be collected and made available through a set of online virtual measures referred to as the “AI Compendium”, modelled on the FSB Compendium of Standards in the financial area. A parallel core set of AI measures could also be provided for, again on the FSB Key Standards model. A supporting online “AI Directory” (AID) could also be constructed that would contain relevant HTML links to all relevant domestic implementation measures across the world.
Separate liability rules and penalties would have to be applied in all countries with regard to all offences concerning AI related activity and misconduct. This would include using AI for criminal purposes or other forms of misconduct or possibly for AI generated liability. All domestic legal systems maintain a wide array of penalty provisions in relation to all forms of criminal and public order offences as well as other civil remedy systems. Certain new AI specific offences may be required although many aspects of misconduct may be most efficiently addressed, in practice, through the development of a series of “Criminal Offence Bridges (or Breaches)” (COBs), extending existing criminal laws to ensure that they apply equally to all AI related activities and in all AI connected environments. Agreement on supporting international sanctions would also have to be secured, which could be set out in an “International Convention on Enforcement and Sanctions” (ICES).
I. AI and Lethal Autonomous Weapons (LAWs)
Specific rules or guidance could be adopted to apply with regard to the use of Lethal Autonomous Weapons (LAWS), referred to as lethal “Remote Autonomous Weapons” (RAWS) in this paper. These could include measures applying with regard to “Lethal Autonomous Biological” weapons (LABs), “Lethal Autonomous Nano” weapons (LANs) or “Lethal Autonomous Nano Devices” (LANDs). These would apply, in particular, where any “Loss of Individual Existence” (LIFE) decisions had to be taken. The relevant measures could also be referred to as “Loss of Individual Life Laws & Ethics” (LILLE) standards. This could be made subject to a formal automatic “Co-Human Machine Decision” or “Co-decision” (CODEC) procedure with the relevant instructions having to be taken by a human operator subject to specific rules governing such matters. This could be drafted in coordination with other guidelines and procedures imposed under military laws, rulebooks or manuals governing life-threatening conduct.
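The CODEC co-decision requirement amounts to a two-key rule: a LIFE decision can only proceed where a machine recommendation is accompanied by an explicit human confirmation. The following sketch is a hypothetical illustration; the class and field names are assumptions, not a proposed implementation.

```python
# Illustrative two-key sketch of the CODEC co-decision procedure.
from dataclasses import dataclass

@dataclass(frozen=True)
class CodecDecision:
    action: str
    machine_recommends: bool
    human_confirms: bool

    @property
    def authorised(self):
        # Two-key rule: both signatures required; absent human consent,
        # the LIFE decision cannot proceed regardless of the machine output.
        return self.machine_recommends and self.human_confirms
```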
J. Continuous Assessment Review & Effectiveness System (CARES)
An effective monitoring system could be established at the domestic and international levels. This could be administered by the Global or International AI Agency (GAIA or IAIA). The objective would be to ensure that all relevant international standards and principles were properly applied on a domestic and cross-border level. All of this would be subject to continuing review and revision. This could be referred to as a “Continuous Assessment Review & Effectiveness System” (CARES). Appropriate corrective action would have to be taken as necessary where measures were not properly adopted and applied or additional risks and exposures arose.
K. Implementation Protocols (PASS & STOP)
An effective implementation regime would have to be maintained at all times. This could, for example, be secured through the use of “Protocol Regulation Official Orders” (PROTOs) or “Public Regulatory Oversight Technology Based Official Control & Order Laws” (PROTOCOLS). All relevant key requirements and procedures and processes would be set out in these protocols which would be managed and administered under an agreed international adherence and implementation system. This would be referred to as a form of “Protocol Adaptive Security & Stability” (PASS) model or “Technology Adaptive Protocol System” (TAPS).
These protocols could include a series of more specific sets of legal, regulatory, ethical, governance, guidance and computer code standards. These could be referred to as “Special Technology Order Protocols” (STOPs), “Special Technology Regulation Advanced (or Action) Protocols” (STRAPs), “Special Technology Ethics Protocols” (STEPs), “Special Technology Advanced Management Protocols” (STAMPs), “Special Technology Regulatory Information Protocols” (STRIPs) and “Special Technology Execution (or Enforcement) Protocols” (STEPs). These could be given effect to under the Protocol Adaptive Security & Stability (PASS) regime and Consolidated Adaptive Protocol & Integrated Technology & Law (CAPITAL) agenda noted. All of the various sets of standards referred to above may be incorporated and implemented through these protocols and protocol regime.
All of this could operate on the basis of a revised set of Public International Law (PIL) measures which could be adopted as a formalised set of “Common Heritage of Humanity” (CHH) or “Common Concerns of Humanity” (CCH) obligations, concepts which are already recognised under PIL although arguably underdeveloped at this stage. Global adoption and application could be supported through a further form of new “Global Functionalism” (or Neo-functionalism) or “Technology Functionalism” (or Technology Neo-Functionalism).
L. Global AI Treaty (GAIT, GILT & GIFT)
All of this could be given effect to under a separate set of international treaty measures. This could specifically be included within a “Global AI Treaty” (GAIT) or “Global Integrated Law & Technology Treaty” (GILT) framework. This could again be incorporated into a larger international “Global Investment, Finance & Trade” (GIFT) Treaty which would effectively create a form of “Bretton Woods 3” Treaty system to build on and develop the earlier Bretton Woods Treaty arrangements entered into in July 1944 to create the post-World War II international monetary, development and trade system. A more specific “Global Reciprocal Economic Area Treaty” (GREAT) could be agreed to establish a new “Global Electronic Market” (GEM) or “Digital Advance (or Adapted) Market” (DAM) for trading new advanced technology.
A supporting set of measures could be considered with a “Financial Investment Regulatory & Sustainable Technology & Security Treaty” (FIRSTS) or “Economic Market Extension Regulation Growth & Ethics” (EMERGE) Treaty. These may include a more specific “Financial Assistance & Cooperation Treaty” (FACT) or “Partnership Assistance & Cooperation Treaty” (PACT) to provide support for emerging and developing economies with further “Sustainable Assistance, Finance & Engagement” (SAFE) and “Sustainable Assistance, Value & Ethics” (SAVE) measures. All of this can be considered over time and built into a larger adaptive, dynamic and emergent new AI control model at the national and international levels.
XIV. Artificial Intelligence and Machine Technology: Close
Technology will continue to change and evolve. This will bring substantial benefit and advantage across society. This is relevant in all areas of new technological engineering and innovation, in particular, in the physical and material, access and infrastructure, applied or substantive and collective social or global areas. This represents a continuous and substantial body of emerging new common knowledge, advance and understanding. Astonishing progress has been made in all fields of new forms of computing (including photonic, neuromorphic, biological, analogue and quantum), telecommunications (including web3), data analytics, BioTech and NanoTech.
Many of the most exciting and significant areas of development have nevertheless arisen in the areas of robotics and artificial intelligence. Massive further advance is expected and inevitable. Robotics and cybernetics are of substantial value in industrial, construction and manufacturing areas as well as in relation to medicine, health and agriculture. AI may bring forward even more substantial progress across all areas of commercial, government and social activity especially through machine reading, machine learning and deep learning. This has created a whole new field of digital cybernetics or Artificialis Intelligentia or Intelligentia Digitalis.
A series of distinctions have been drawn in this paper between AI and machine intelligence, machine learning, machine robotics, machine cognition and machine sentience with all of the core functions identified within each of these. A number of specific types of cognition have been identified, including motor, sensory, processing, environmental, identity, social, communication, chemical, conflict, code, causation, attribution, imagination and control cognition. While infinite degrees and layers of machine sentience will arise, the term consciousness is reserved for reference to biologically equivalent systems. A series of comments and conclusions have been drawn with regard to the massive advances expected and the necessary inherent limitations in machine sentience and machine evolution and architecture. The most difficult and unpredictable areas of future direction may remain in the areas of hybrid or synthetic biological and synthetic intelligence (SI) and the creation of new forms of composite consciousness and awareness.
In response to all of this, it is necessary to construct a new control framework for all advanced forms of technology, including relevant legal, regulatory and ethical provision. While laws set out core rights and obligations, the detail has often to be extended through the use of more prescriptive regulatory provisions. Ethical standards then establish higher level principles that can be applied on a continuing basis including in more uncertain and emergent areas. The potential difficulty that arises with regard to statutory hard law is that it is often difficult, slow and expensive to amend while judicial legal construction is limited to the accident of instruction and litigation. Ethical provisions may accordingly become of even more importance in the technology area over time. Ethical provisions nevertheless suffer from their own limitations in terms of generality, consequential lack of specificity and non-enforceability.
A new form of composite control instrument may still be possible through the use of protocols (PROTOs and PROTOCOLs) based on diplomatic practice where protocols can be assigned legal effect. Protocols can separately be used in other technical areas to set out processes and procedures. A new form of combination protocol can then be developed which includes various components. This may consist of enforceable provisions (with absolute (“brightline”) obligations), non-enforceable principles (higher level ethical standards), aspirational objectives (policy targets), more detailed guidance (including possible processes and procedures with timelines) and, where relevant, implementing computer code (“smart law”) to allow this to be incorporated into programmes and algorithms (including through “smart contracts” or “smart regulation”). One or more protocols could be adopted in each of the technology areas referred to above. These could then be given effect to under the Protocol Adaptive Security & Stability (PASS) implementation system and Technology Adaptive Protocol System (TAPS) or Special Technology Operational Protocol System (StOPS) referred to.
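The combination protocol's five component classes can be sketched as a simple container type. All field names here are assumptions for illustration; the only substantive point carried over from the text is that the brightline obligations alone are directly enforceable, while the other components operate as principles, targets, guidance and implementing code.

```python
# Sketch of the "combination protocol" structure; field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Protocol:
    name: str
    enforceable_provisions: list = field(default_factory=list)  # absolute "brightline" obligations
    principles: list = field(default_factory=list)              # higher level ethical standards
    objectives: list = field(default_factory=list)              # aspirational policy targets
    guidance: list = field(default_factory=list)                # processes, procedures, timelines
    smart_code: list = field(default_factory=list)              # implementing "smart law" rules

    def is_enforceable(self, item):
        """Only the brightline obligations carry direct legal force."""
        return item in self.enforceable_provisions
```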
It is accordingly essential that a new relationship is constructed between Law, Ethics and Technology (LET). It is possible to create a new “Legal & Ethical Framework for Technology” (LEFT) or dedicated “Law, Ethics & Technology Training (or Target) Execution (or Enforcement) Regime” (LETTER). An appropriate set of control measures can be adopted to be incorporated into a larger Consolidated Official Restatement of Rules, Ethical Conduct & Technology (CORRECT) programme. This could be based on an appropriate Conduct of Official National Technology with enhanced Regulation, Oversight & Law (CONTROL) framework and Consolidated Official Managed Program for Law, Ethics & Technology Enforcement (COMPLETE) programme. In so doing, this will implement a structured MIRACLE (Machine Intelligence & Robotic Adaptive Control, Law & Ethics) agenda as part of a larger MODEL (Managed Ordered Design Ethics & Law) system.
A large number of difficult social and ethical issues will necessarily arise in each of these technology areas. Many, if not all, existing fields of law and regulation will be impacted and have to be revised appropriately. The inherent limitations within hard law and regulation may necessitate further focus on the development of a whole series of new technological and ethical protocols to ensure that each new advance can be responded to in a timely, relevant and sufficiently detailed and effective manner. Difficult political and social choices will have to be made especially in terms of ensuring appropriate degrees of equality of access, capacity and benefit with all of the new forms of augmentation and advance that will necessarily follow. Complex social and political choices and challenges remain to be properly identified and resolved. AI and wider Technology Law and Ethics will become of increasing significance in all of these fields with the need to construct a new complete and coherent composite response framework. The future has nevertheless to remain within society’s choice and control and not be assigned to any emergent and uncontrollable processes, forces or entities. The choice and responsibility are ours.