
The International Lawyer

The International Lawyer, Volume 57, Number 2, 2024

Artificial Intelligence (AI) Law, Rights & Ethics

George A Walker

Summary

  • Continuous change in the nature and development of technology has had an enormous impact on the evolution of business, commerce, government, and wider social systems and societies throughout history.
  • The speed and depth of change have also increased dramatically and often exponentially, including through Moore’s Law, and with wider recursive and exponential growth effects.
  • All of this has generated substantial new legal, regulatory, and ethical issues that have to be examined for the first time.   
  • A whole series of novel and original responses and resolutions are required to protect all relevant rights, interests and entitlements, and to preserve society and civilisation going forward.
  • The purpose of this paper is to review the nature of the most significant advances in AI technology that have occurred and the nature of the emerging legal, regulatory, and ethical responses adopted.

I. Artificial Intelligence Introduction

Continuous change in the nature and development of technology has had an enormous impact on the evolution of business, commerce, government, and wider social systems and societies throughout history. This has resulted in significant innovation and massive advantage. The speed and depth of change have also increased dramatically and often exponentially, including through Moore’s Law, and with wider recursive and exponential growth effects. This may become even more dramatic over time. All of this has generated substantial new legal, regulatory, and ethical issues that have to be examined for the first time. A whole series of novel and original responses and resolutions are required to protect all relevant rights, interests and entitlements, and to preserve society and civilisation going forward.

Technology is made up of a number of significant elements which include physical, digital, and global components and perspectives. Physical technology can be summarised in terms of power (energy), propulsion (transport), processing (building), production (manufacturing and materials science), and preservation (carbon and climate management and sustainability). Digital technologies consist of access and infrastructure technologies, such as computing, telecommunications, cloud data systems, blockchain, and internet advance (including Web3), as well as access or applied technologies, including specific automation (and smart contracts), big data analytics, cryptography and biotechnology (BioTech), nanotechnology (NanoTech), and machine reading, machine learning, and machine sentience. The most significant developments in access technologies are nevertheless unfolding in the areas of artificial intelligence (AI) and robotics. Generative AI (GenAI) alone is estimated to be worth around $2.6–4.4 trillion annually across sixty-three possible use cases.

A number of specific difficult issues arise with regard to AI and Robotics (AIR) law and regulation as well as separate sensitive moral and ethical challenges and concerns. Control issues in AI and Robotics law include data protection and privacy, cryptographic and other system security, private property and intellectual property rights protection, preventing bias and manipulation, ensuring fair judicial proceedings and sentencing, appointment and replacement situations (in employment), machine human interaction and machine interface, machine rights, and lethal autonomous weapons (LAWs). More general challenges arise with regard to the extent of permitted autonomy, possible loss, control, compensation, and policy base which can be summarised in terms of release, risk, regulation, recompense, and reform or rectification. All of this has become even more significant with the continuing and relentless advance of modern technology and AI and robotics.

The purpose of this paper is to review the nature of the most significant advances in AI technology that have occurred and the nature of the emerging legal, regulatory, and ethical responses adopted. The nature of robotics, cybernetics, and nanotechnology is examined. The meaning and value of artificial and machine intelligence are considered in further detail. The most significant recent advances are referred to. The emerging control responses adopted at the international, European and domestic levels are considered separately. The adoption, use, and application of these advances in the financial area are reviewed more specifically. An original new composite control model is constructed. Some of the most significant issues and debates generated in this area are referred to, with a provisional set of comments and conclusions produced.

II. Artificial Intelligence Debate

The impact of AI and its possible future in terms of relative benefit and risk have been subject to substantial debate. This can bring significant benefit and value as well as new and emergent threats and exposures. A number of significant difficulties arise with regard to this debate especially in terms of the nature of machine intelligence itself, the relationship between human and machine intelligence, the extent to which and when machine intelligence will surpass humans, the extent to which future forms of general or super intelligence will emerge and can be managed and controlled, and the future of synthetic human machine intelligence.

The brilliant British mathematician, Alan Turing (1912–1954), considered that “it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers.” Turing developed a test to assess machine intelligence, stating that “[a] computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” The equally brilliant British theoretical physicist, Stephen Hawking (1942–2018), warned that “[t]he development of full artificial intelligence could spell the end of the human race.” Hawking commented that “AI is likely to be the best or worst thing to happen to humanity.”

A number of separate industry, academic, and private opinions have subsequently been expressed. AI itself has been described as our “last invention.” Social philosopher, Daniel Schmachtenberger, has stated that AI may either result in massive forms of “disaster” or “dystopia,” especially through the creation of “surveillance states,” with the need to identify a corresponding third discrete model or “third attractor.” Satya Nadella, CEO of Microsoft, confirmed that “AI is the defining technology of our times.” Jensen Huang, CEO of NVIDIA, noted that “[s]oftware is eating the world, but AI is going to eat software.” Jeff Bezos, founder and Executive Chairman of Amazon, considered that “[w]e’re living in a golden age of AI.” Tim Cook, CEO of Apple, expressed the view that “[w]hat all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.”

Two of the “godfathers” of AI, Geoffrey Hinton and Yoshua Bengio, have expressed significant concerns. Hinton left Google so that he could speak more freely about his concerns and has warned that the possibility of AI “wiping out humanity” is not “inconceivable.” Bengio accepted that AI did not create a present existential threat, although this could become “catastrophic” in future, with there being “too much uncertainty” as to how AI might develop.

Another pioneer of AI, Professor Stuart Russell, has referred to an “alien invasion,” stating that “the stakes couldn’t be higher” and that the UK was failing to protect itself against the “existential threat” of machines. American author James Barrat has warned of the unforeseen consequences and threats of AI and AGI. Matt Clifford, the UK Prime Minister’s AI adviser and Chairman of the Advanced Research and Invention Agency (ARIA), has warned that AI could produce deadly weapons that “kill humans” within two years. Matt Clifford and senior diplomat, Jonathan Black, were appointed to the UK Department for Science, Innovation and Technology (DSIT) to lead the UK’s proposed inaugural international AI Safety Summit in November 2023, to be held at Bletchley Park, where Alan Turing worked with the Government Code & Cypher School (GC&CS) during World War II.

The third “godfather” of AI, Meta chief AI scientist, Yann LeCun, has nevertheless referred to the threat of AI causing an existential risk to humanity as “preposterously ridiculous.” AI would not “take over the world,” with this being “a projection of human nature on machines.” This is referred to as the “android or anthropogenic fallacy” for the purposes of this paper. LeCun adds that AI “would undoubtedly surpass human intelligence,” although this would take decades. AI would bring “a new renaissance for humanity.” LeCun has also stated that ChatGPT does not have human level intelligence and is “not [even] as smart as a dog” or a child. Andrew Ng, Computer Scientist and Global Leader in AI, has added that “[d]espite all the hype and excitement about AI, it’s still extremely limited today relative to what human intelligence is.” American computer scientist and Google Director of Engineering, Ray Kurzweil, has highlighted the massive potential contributions and almost unlimited future growth potential of the technology. American software engineer, Marc Andreessen, published a separate Techno-Optimist Manifesto in October 2023.

Elon Musk, CEO of Tesla and SpaceX and co-founder of OpenAI and The Boring Company, has warned that “[i]f you’re not concerned about AI safety, you should be.” Elon Musk added that, “[t]here certainly will be job disruption. Because what’s going to happen is robots will be able to do everything better than us . . . . I mean all of us” and that, “I am not sure exactly what to do about this. This is really the scariest problem to me, I will tell you.” He has specifically warned of “a deep intelligence in the network.” He had stated earlier that “AI doesn’t have to be evil to destroy humanity – if AI has a goal and humanity just happens to come in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.” Humans had to “merge with AI to avoid the risk of becoming irrelevant.” The solution for Elon Musk is to increase regulatory oversight of the development and implementation of AI as soon as possible.

Professor Shannon Vallor has noted that “a future with sentient machines who think with us could, in principle, be every bit as good – and humane – as a future without them,” although difficulties remained with “algorithmic discrimination and disinformation to growing economic inequality and environmental costs.” Professor Vallor warned that the real threat from AI is not existential but that it may “devalue the political and cultural currency of humane thought.” The American Jewish philosopher, Hans Jonas (1903–1993), stated that a “technopoly” that supports the “quenching of future spontaneity in a world of behavioural automata” could place “the whole human enterprise at its mercy.” The Israeli historian and philosopher, Yuval Noah Harari, refers to AI in terms of “[a]lien Intelligence” and the dangers of AI being able to manipulate humankind’s “storytelling” capability and control of communication and language.

The effect of all of this has been to create two AI camps. There are the “boomers” and the “doomers,” the “maximalists” and the “minimalists,” the “aggregators” and the “subtractors,” the “accelerationists” and the “decelerationists,” the “techno-optimists” and “techno-pessimists,” and those promoting “technological utopia” and those predicting “technological dystopia.” All of this can be summarised in terms of “effective accelerationism” (e/acc) with the opposite being referred to as “defective acceleration” (d/acc) for the purposes of this paper. This remains a complex and confused but essential area of debate and one that will have a profound impact on the development and evolution of society and civilisation and on the final perceived peak achievement of the human race.

III. Robotics and Cybernetics

Artificial intelligence is generally considered with robotics which may be relevant in many cases although the two must be distinguished. Robotics (or RoboTech or BotTech in this paper) is concerned with the use of programmable, or autonomous, machines or mechanisms that can carry out physical functions generally using sensors and actuators. The term robot is derived from the Czech word robota, meaning forced labour or work. This was first used by the Czech writer, Karel Capek, in 1920 in the play R.U.R. (Rossum’s Universal Robots). This may include an automaton, or automata, and mechanical devices intended to imitate humans or human functions. Robotics falls within the field of cybernetics, which is concerned with control systems. Robots can be used for agricultural, industrial, construction, domestic and military purposes as well as human collaboration (such as with cobots) and in nanotechnology (including nanobots). Roboethics is concerned with the design, manufacture, use, transfer, and destruction of robots with possible robot rights involving potential issues of identity and liability.

Robotics may also be used for military purposes, including drones, such as unmanned aerial vehicles (UAVs) or remotely piloted aircraft (RPAs), as well as robotic soldiers (BattleBots, Cyborgs, and supersoldiers or SoldierBots), or other mobile controlled robotic weaponry. Autonomous systems may either be used for defensive or offensive purposes. Specific concerns arise where high degrees of automation are incorporated, creating Lethal Autonomous Weapons (LAWs) or, as referred to in this paper, lethal Remote Autonomous Weapons (RAWs). An open letter signed by 1,000 AI and robotics experts calling for a ban on autonomous weapons was presented at the 24th International Joint Conference on Artificial Intelligence (IJCAI-15) in Buenos Aires in July 2015.

Biorobotics is concerned with the study of robotics with biomedical engineering on an inter-disciplinary basis to understand how robotics and biological systems work and to develop new combination biomechanical (or biomechatronic) or biomedical devices. While robotics involves machine-regulated mechanical function, control, and manipulation, and cybernetics involves communication and control systems, biorobotics includes bionics, or biological electronics, and cybernetic organisms (cyborgs). This would specifically incorporate the development of animaloid and humanoid robotic systems.

Nanotechnology (NanoTech), or molecular nanotechnology, is concerned with the development of technology at the atomic and molecular (including supramolecular) levels for manufacturing and production purposes. NanoTech involves inorganic and manmade materials while BioTech uses living organisms. NanoTech and nanoscale technologies attempt to manage and manipulate matter between one and 100 nanometers, a nanometer being one billionth of a meter (0.000000001 m). Nanotechnology is essentially concerned with the creation, separation, combination, consolidation, deformation, or destruction of materials at the atomic or molecular level.

From all of this, a new taxonomy of robots and bots can be constructed for the purposes of this paper. This would include worker bots (such as CoBots or WoBots) as well as household bots (DomBots or HouseBots); emotionally sensitive or empathetic “eBots” or “EmBots;” gender based “fBots” (female), “mBots” (male); and “cBots” (children). This would also include intelligent “iBots” or AI controlled “AIBots;” fully autonomous “AutoBots;” network or internet-controlled “NetBots;” software controlled actuators (SoftBots); nano sized “NanoBots;” “Artificial General Intelligence” (AGI) controlled “AgiBots;” and “Artificial Super Intelligence” (ASI) controlled “SuperBots” or “UltraBots.” As discussed earlier, robots known as BattleBots and SoldierBots have military uses. A series of further types of new entities can be identified, including manufactured anthropogenic Droids; human machine enhanced Cyborgs; human machine hybrids; and copy systems.

Such a taxonomy is useful in developing a new language and classification systems for future robotic and human machine connection systems. This can be extended to create a parallel architecture or taxonomy for AI systems and for different levels of singularity in this paper.

IV. Artificial Intelligence and New Technology Transformation

Social advance will be impacted by more general innovations in technology and new or future technology, with parallel advances in infrastructure improvements. This may be referred to as CoreTech, NewTech or FutureTech. The most immediate and profound changes are nevertheless expected in the areas of AI (AITech) and machine intelligence (MachineTech) and with intelligence controlled robotics (RoboTech). The nature and meaning of AI can be considered in further detail.

A. Artificial Intelligence

Artificial intelligence is another complex, combination, contestable and polysemous or polysemantic term with different meanings. The meaning and scope of AI remains unclear. The term Artificial Intelligence (AI) was first used by Professor John McCarthy at a Dartmouth College conference in 1956. Two initial misunderstandings generally arise: AI is considered in terms of biological intelligence (BI), as replicating or simulating human intelligence, and in terms of artificial autonomous intelligence (ATI), which a specific system may or may not exercise. The meaning and scope of artificial is unclear and specifically whether this simply means non-biological or otherwise. The meaning of intelligence is also generally unspecified and undefined with intelligence often confused with consciousness.

Artificial can be understood in this paper as referring to mechanical, electrical, digital or non-biological. Intelligence can be considered in terms of the ability to carry out one or more neural functions or processes on a programmed, remote or autonomous basis. AI is then defined, for the purposes of this paper, in terms of the carrying out of any data or neural analysis, processing or decision taking function on a programmed, directed, or autonomous basis. This essentially involves cognition, awareness, and the ability to switch between control and cognitive functions. A twelve part “Intelligence or Cognition Wall” is constructed to examine this in further detail.

From this, the wider AI debate may more accurately be considered in terms of machine intelligence (MI) to avoid the points of confusion referred to. Various levels of MI can then be created to refer to all of the different types and grades, forms or levels of machine function or cognition that may arise which can be examined in terms of machine sentience. Machine sentience is used in this paper to distinguish biological equivalent consciousness from machine cognition and machine awareness. The provisional position adopted is that machines, however powerful and sophisticated, will not achieve human comparable, or equivalent, consciousness apart from through some form of synthetic or combined human machine intelligence.

In terms of current regulatory treatment, the Financial Stability Board (FSB) has defined artificial intelligence as “the application of computational tools to address tasks traditionally requiring human sophistication.” The FSB has referred to AI in terms of “the theory and development of computer systems able to perform tasks that traditionally have required human intelligence.” Machine learning is a “method of designing a sequence of actions to solve a problem [algorithms] which optimise automatically through experience and with limited or no human intervention.” Machine learning is based on automated optimisation, prediction and categorisation, and not on causal inference. Machine learning uses classification and regression algorithms and analysis.

The FSB considers machine learning as part of AI with Big Data analytics a form of parallel connected activity. Big Data analytics is generally used to describe “the storage and analysis of large and/or complicated data sets using a variety of techniques including AI.” The FSB includes machine learning within AI and supervised learning, reinforced learning, and unsupervised learning within Big Data analytics coupled with deep learning acting as a bridge technique. Machine learning generally uses self-reflective algorithms that improve their functional capability over time. Big Data analytics is concerned with the identification of patterns, correlations or trends in large data sets for predictive purposes.

AI can more specifically be considered in terms of machine perception, machine processing, machine learning, machine actuation and simulation, and emulation or emotional empathy. Machine perception includes object recognition or sensory input. Machine processing consists of problem solving, reasoning, and decision making. Machine learning, as noted, consists of reinforcement, supervised, and unsupervised learning. Machine actuation involves the passing on of instructions to create motor effects in the real world, which includes cybernetics and robotics. Natural language processing (NLP) is concerned with machine understanding of human text and spoken communication. Machine empathy is concerned with making machines more emotionally sensitive, reactive, and self-aware. Nevertheless, machine systems are generally still correlation based, rather than causation or causality based. Machine programmes identify patterns and correlations and generally operate on a deterministic basis, with the most significant recent advances being possible in the area of deep learning and “Artificial Neural Networks” (ANNs).
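The correlation point can be illustrated with a short sketch. The Python example below is purely illustrative (the data, variable names, and hidden confounder are invented for this paper’s purposes): a simple fitted model captures the statistical association between two variables even where neither causes the other.

```python
# Illustrative sketch only: a model fitted on observational data captures
# correlation between variables, not the causal structure behind them.
import numpy as np

rng = np.random.default_rng(0)

# A hidden confounder drives both x and y; x does not cause y.
confounder = rng.normal(size=1_000)
x = confounder + rng.normal(scale=0.3, size=1_000)
y = confounder + rng.normal(scale=0.3, size=1_000)

# A simple learned predictor (least squares) exploits the correlation...
slope, intercept = np.polyfit(x, y, deg=1)
print(f"correlation(x, y) = {np.corrcoef(x, y)[0, 1]:.2f}")
print(f"fitted model: y ~ {slope:.2f} * x + {intercept:.2f}")

# ...but intervening on x in the real world would not change y, which is
# the distinction between pattern finding and causal inference.
```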

A number of different types of AI can be identified in the existing writing. These can, for example, be considered in terms of capability and functional classifications. A more complete parallel AI taxonomy, or architecture, can be developed for the purposes of this text. AI capability levels would include Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI), and Artificial Super Intelligence (ASI). Functional divisions within this could include Reactive, Limited memory, Theory of Mind, and Self-aware level machines. A number of other divisions and classifications can also be consolidated, or reconciled, for the purposes of this text. Reference may then be made to Artificial Biological Intelligence (ABI), Artificial Autonomous Intelligence (ATI), Artificial Augmented Intelligence (AUI), Artificial Generative Intelligence (AgI), Artificial Network Intelligence (AnI), Artificial Collective (or Composite) Intelligence (ACI) as well as Synthetic Artificial Intelligence (SAI or SI), Synthetic Network Intelligence (SNI), Synthetic Super Intelligence (SSI), and Synthetic Ultra Intelligence (SUI). It is possible that SI, which connects people with AI machines through the use of invasive or non-invasive neural connection devices, may become even more important than simple AI over time. Different acronym based formulations may also be developed to explain the ontogeny or evolution of AI. While it may be possible to contain all of the principal threats that arise with AI more generally, which will be determined by the inherent limitations within machine intelligent systems, it is possible that the most significant difficulties may arise with synthetic or hybrid forms of human MI and SI. All of this can also be considered with different stages of Civilisation evolution and Singularity.

B. Artificial Intelligence Evolution

Machine learning and AI generally began with studies in formal reasoning by ancient Greek, Chinese, and Indian scholars, which was followed later by calculating machines, such as the British mathematician Charles Babbage’s Analytical Engine in 1837 and Alan Turing’s work. Modern AI study began with a workshop at Dartmouth College in Hanover, New Hampshire in 1956. A number of advances were made during the 1960s and later in the 1980s, although there were two periods of underinvestment (referred to as the “AI Winter”) around 1974–1980 and 1987–1993. AI study recovered again during the late 1990s with increased hardware and software capability, inter-disciplinary interest and a focus on more specific useable solutions. The FSB summarises more recent drivers of interest in the use of AI in the Financial Technology (FinTech) area in terms of such supply side factors as improved technology and infrastructure in data supply, and demand factors including profitability, competition, and regulatory demand, especially in terms of prudential and data reporting, anti-money laundering, best execution and other obligations.

AI generally operates through the use of algorithms, which are simple or more complex overlapping structured computer program instructions. A number of specific objectives can be identified including symbolic, sub-symbolic, connectionism, statistical learning, and integrated approaches. A number of challenges can be identified including with regard to knowledge representation, prediction, automated enhancement, perception, NLP, and collective, group or social intelligence. Particular tools are used such as search optimisation, formal logic, probability, classification, and neural networks. As financial institutions have increasingly adopted machine learning techniques, authorities have begun to examine the regulatory issues that may arise.

C. Artificial Intelligence Objectives

Early AI studies focused on neurobiology, neuroscience, and cybernetics, which is the study of control and communication systems. Meetings were held during the 1940s and 1950s at the Teleological Society at Princeton University and at the Ratio Club in the UK, which was set up by the neurologist John Bates following a symposium at the Society of Experimental Biology in Cambridge in July 1949. Research from the 1950s until the 1980s focused on symbolic artificial intelligence using human readable representations of problems, logic, and search. This was referred to as “Good Old-Fashioned Artificial Intelligence” (GOFAI). Approaches included cognitive simulation, logic based symbolism, anti-logic based symbolism, and knowledge based symbolism. Attention switched in the 1980s to sub-symbolic approaches focusing on specific problems, including embodied intelligence, computational intelligence, and soft computing. By the 1990s, attention was transferred to using statistical learning techniques with a focus on scientific results. These approaches would be integrated subsequently.

D. Artificial Intelligence Challenges

AI has to replicate a number of more specific intelligence functions. These include reasoning, knowledge representation or engineering, prediction, learning or automated enhancement, natural language processing (NLP), sensory perception, robotic manipulation, and collective group or social intelligence. The success of much of the later work in this area has arisen through a focus on specific or dedicated applications (“Narrow AI” or ANI) rather than securing “Strong AI” (SAI) or artificial general intelligence (AGI). AGI would require the availability of an extensive body of “commonsense knowledge” which humans rely on in taking decisions, and would have to be able to manage all of the separate processes referred to, including reasoning, prediction, learning, comprehension, perception, application, and social reaction.

E. Artificial Intelligence Tools

A number of more specific tools can be used to solve problems through machine learning and AI. These include searching (and optimisation), logic, classification and controllers, probability, and neural networks. The most important future advances may be possible in such areas as neuromorphic neural network systems.
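As a purely illustrative sketch of the search and optimisation tools referred to above, the following hypothetical Python fragment applies simple hill climbing to a toy objective function; the function, step size, and iteration count are invented for demonstration only and do not describe any particular deployed system.

```python
# Minimal sketch of search-based optimisation (hill climbing), one of the
# generic AI tools referred to above; the objective function is a toy example.
import random

def objective(x: float) -> float:
    # Toy objective with a single maximum at x = 2.
    return -(x - 2.0) ** 2

def hill_climb(start: float, step: float = 0.1, iterations: int = 1_000) -> float:
    current = start
    for _ in range(iterations):
        candidate = current + random.uniform(-step, step)
        # Keep the candidate only if it improves the objective.
        if objective(candidate) > objective(current):
            current = candidate
    return current

best = hill_climb(start=random.uniform(-10, 10))
print(f"best x found: {best:.3f}, objective: {objective(best):.5f}")
```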

AI has been built into many common daily processes, such as in internet search engines, optical character recognition, and in banking and financial transactions. Tasks are often withdrawn from the definition of machine intelligence and AI once they can be carried out successfully. This is referred to as the “AI Effect.” Substantial progress has been made. It nevertheless remains difficult to replicate many neural functions. Much of this is unconscious and based on biological evolution and natural selection. Machine intelligence and AI have not been able to achieve the substantial advances that were expected. The computer program, AlphaGo, developed by Google’s DeepMind Technologies Ltd AI unit in the United Kingdom, beat the Go world champion, Lee Sedol, in March 2016 and beat Ke Jie in 2017. Substantial difficulties remain in arriving at Artificial General Intelligence (AGI) and solving all AI complete (or AI hard) problems which still require human intervention.

Continuing technological development is expected to bring substantial benefit across society. These systems are expected to increase further and faster with the recursive nature of technology. Generative AI (GenAI) applications, such as ChatGPT, GitHub Copilot, Stable Diffusion, Anthropic Claude, and Google PaLM 2 on Bard, have accelerated rapidly. GenAI could contribute $2.6–4.4 trillion to the global economy, which compares with the UK GDP in 2021 of $3.1 trillion. Growth was specifically expected in the areas of customer operations, marketing and sales, software engineering, and R&D, with banking markets benefiting by $200–340 billion annually, and retail and consumer goods by $400–660 billion. Non-generative AI and analytics could create $11–17.7 trillion, which could further increase by 15–40% through the use of GenAI. Substantial risk could nevertheless also arise, such as with regard to social and economic or macro shocks, misuse, tool use, and capability overhangs with wider systemic and emergent threats, as well as military and security risks with the need for appropriate guardrails to be put in place.

F. Artificial Intelligence Advantage and Disadvantage

All of the benefits and associated risks and threats of technology have been listed in many reports and papers in different ways without any clear structure. A more complete and sophisticated advantage and disadvantage template can be constructed for FinTech, and for technology and AI more specifically. This is based on eight separate perspectives of technology, business, users, markets, regulation, infrastructure, government and official policy, and wider financial and social stability. Specific FinTech advantages can, for example, be summarised in terms of disintermediation, digitalisation, identity digitalisation, authentication, automation, replication, reconciliation, modularisation, personalisation, interlinkage, codification and shared function, shared responsibility, and shared liability. Corresponding disadvantages would include fragmentation, asset protection, loss of privacy, complexity, displacement, separation, competition, concentration, confusion, limited functionality, technological dependence and technological contagion, and systemic collapse. A separate digital risk template can also be constructed which examines in further detail the nature of technology risk, information risk, data risk, knowledge risk, and archive risk. Both of these risk templates can be applied in the AI technology specific arena.

A more specific parallel AI advantage template can then be developed based on the FinTech taxonomy referred to. The general FinTech advantages referred to are equally applicable in relation to AI and SI. Advanced technology (TechTech) and AI benefits may include increased sophistication, massive data volume handling, speed of processing, data accuracy and verification, originality and creativity, continuing innovation and advance, speed of advance, recursion, copying and replication, connectivity, continuity, and universality across all commercial, industrial, and social sector areas. Corresponding advantages may also arise with regard to business applications (BusTech or FirmTech), users (UserTech), markets (MarketTech), regulation (RegTech), infrastructure (InfraTech), government and official policy (GovTech), and financial and social stability (SocialTech). Many of these benefits will arise from technology more generally and with regard to AI uses and applications more specifically.

A series of corresponding risks and challenges also arise. These include digital data abuse, digital bias and discrimination, digital screening and surveillance, digital access, inequality and divide, digital security and instability, social and community impact, unemployment and welfare effects, possible genomic and genetic protection, genetically modified foods (GMFs), and wider employment, economic, and environmental impacts. These are often referred to as ethical risks in relation to AI although this may confuse and ignore the full range of impacts that may arise and challenges that have to be considered in practice.

AI risks can again be examined in further detail in terms of the eight part taxonomy constructed. AI technology risk may specifically be summarised in terms of amorality and the absence of human values. More specific exposures could then be identified in terms of policy conflicts, policy gaming with sequential planning, proxy (power) extensions, situational awareness and false reporting, hallucinations (fabrications and with false verifications), drift (producing different responses at different times to the same prompts), covert operations and collusion, lack of transparency and explainability, toxicity (use of offensive language or references), information explosion and recursion, self-replication and new code, and entity (AI or robotic) origination. A series of further more specific risks or threats can also be developed with regard to the other seven perspectives of business and market disruption, users and data control, markets and dominance, law, regulation and compliance, infrastructure and integration, government or official policy conflict and failure, and financial and social instability or collapse. All forms of endogenous as well as wider exogenous and potentially existential risk also have to be considered. Existential threats can be referred to as “eRisk” in this paper.

AI can generate massive benefit although it can also create significant risk and vulnerability. All of these potential threats and exposures have to be considered and responded to on an integrated, aggregate, and complete basis. Any new AI solutions will have to manage all of these potential emergent risks, challenges, and exposures.

V. Machine Learning & Deep Learning

The most significant recent advances have occurred in the areas of Artificial Neural Networks (ANNs) and Deep Neural Networks (DNNs) and with the construction of large language models (LLMs). ANNs and DNNs have significant possible applications especially in the areas of image recognition, image restoration, natural language processing (NLP), visual and audio processing, bioinformatics, medical image analysis, drug discovery and toxicology, bioengineering and bionics, client and customer support, chatbots, financial analysis and automated trading, smart contracts and regulatory technology (RegTech), and compliance including anti-money laundering and fraud protection.

Deep learning uses multiple ANNs. McCulloch and Pitts originally referred to the development of computational neural models in 1943, with this being taken forward by Donald Hebb and the introduction of the Hebbian theory of reinforcing firing cells (summarised in terms of “cells that fire together wire together”). A Mark I “Perceptron” was constructed by Frank Rosenblatt at Cornell University, producing linear outcomes, with the work supported by the US Office of Naval Research. The limitations of the early technology were set out by Marvin Minsky in Perceptrons in 1969.
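A minimal Python sketch of a Rosenblatt-style perceptron may illustrate the basic mechanism; the example below is a toy reconstruction (the AND task, learning rate, and epoch count are arbitrary illustrative choices) and not the original Mark I implementation.

```python
# Toy sketch of a Rosenblatt-style perceptron learning the logical AND function.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 0, 0, 1])                        # AND targets

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for inputs, target in zip(X, y):
        # Step activation: fire (1) if the weighted sum exceeds the threshold.
        prediction = int(np.dot(weights, inputs) + bias > 0)
        error = target - prediction
        # Perceptron learning rule: adjust the weights towards the target.
        weights += learning_rate * error * inputs
        bias += learning_rate * error

print("learned weights:", weights, "bias:", bias)
for inputs in X:
    print(inputs, "->", int(np.dot(weights, inputs) + bias > 0))
```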

Further progress was possible through the development of the earlier idea of backpropagation and of gradient descent to construct modern neural models. Kunihiko Fukushima built neural networks with multiple pooling and convolutional layers (intersecting functions) in 1979 with the “Neocognitron” network. This work was assisted through significant advances in Metal Oxide Semiconductor (MOS) transistors and the development of Very Large-Scale Integration (VLSI) integrated circuit capabilities during the 1980s and 1990s as well as later Graphics Processing Units (GPUs). The most successful deep neural network companies include Google, IBM, Intel, Microsoft, Qualcomm, OpenAI, NeuralWare, Starmind, Neurala, and Clarifai. Deep neural network market size was expected to reach $5.98 billion by 2027 with a 21.4 percent CAGR.
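The role of backpropagation and gradient descent can be illustrated with a toy example. The following Python sketch trains a one-hidden-layer network on the XOR problem; the network size, learning rate, and iteration count are illustrative assumptions rather than any production configuration.

```python
# Toy sketch of gradient descent with backpropagation on a one-hidden-layer
# network learning XOR; illustrative only.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass through the hidden and output layers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: propagate the output error back through the layers.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Gradient descent: move the weights against the gradient of the loss.
    W2 -= 0.5 * hidden.T @ d_output
    b2 -= 0.5 * d_output.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

print("predictions:", output.round(3).ravel())
```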

Large Language Models (LLMs) use accelerators and can process substantial amounts of text data following training, with much of this extracted (scraped) from the Internet and with concerns arising that this is generally done without prior permission. An accelerator is a high-performance parallel learning computational device that can operate through large Wafer-Scale Engine (WSE) chips or separate Graphics Processing Units (GPUs), Multicore Scalar Processors (MSPs), or spatial accelerators (such as Tensor Processing Units or TPUs). LLMs can be used for Natural Language Processing (NLP), answer response, and language translation. LLMs operate by converting an input text into tokens, from which they predict the next token (word) on a scaled probability basis. LLMs are trained to produce the desired range of outputs using, for example, back propagation and gradient descent as noted.
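The next-token prediction loop can be illustrated, in heavily simplified form, with a toy bigram model; the corpus and sampling approach below are invented for illustration, and real LLMs use deep transformer networks rather than simple counts, although the predict-and-sample cycle is similar in outline.

```python
# Toy illustration of next-token prediction: a bigram count model scores
# possible next tokens by frequency and samples one.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next token and the next token follows".split()

# Count which token follows which (a stand-in for learned parameters).
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def next_token(token):
    counts = transitions[token]
    if not counts:                       # no continuation seen in the toy corpus
        return None
    tokens, weights = zip(*counts.items())
    # Sample the next token in proportion to its observed frequency.
    return random.choices(tokens, weights=weights)[0]

generated = ["the"]
for _ in range(6):
    nxt = next_token(generated[-1])
    if nxt is None:
        break
    generated.append(nxt)
print(" ".join(generated))
```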

Different types of machine learning processes may be distinguished, including supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, and deep learning. Distinct types of Spiking Neural Networks (SNNs) and Large Language Models (LLMs) can also be referred to.

A. Supervised Learning

Supervised learning involves the processing of inputs provided by an “instructor,” or “teacher,” using pre-determined systems, or maps, to generate outputs. Supervised learning applies labelled data sets to train algorithms to carry out the classification function with adjustments and cross-validation. Data is provided in the form of training data, or supervisory signals, with arrays, or vectors, and matrices. The system will determine the appropriate output having been trained using the input functions. Supervised systems include active learning (using data labels and optimal experimental design), classification learning (placing observations within categories or sub-populations), and regression learning (using dependent outcome or response labels within a numerical range) as well as similarity learning (using a similarity function to rank objects). Supervised learning can be used for image and object recognition, predictive analytics, customer sentiment analysis, and spam detection.
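A short illustrative sketch of supervised classification using the scikit-learn library is set out below; the choice of dataset (iris) and model (logistic regression) is an assumption made purely for the example.

```python
# Sketch of supervised classification with scikit-learn: the algorithm is
# trained on labelled examples and then predicts labels for unseen inputs.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                        # labelled training data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)                              # learn from the labels
predictions = model.predict(X_test)                      # classify new inputs
print(f"test accuracy: {accuracy_score(y_test, predictions):.2f}")
```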

B. Unsupervised Learning

Unsupervised learning detects patterns in unlabelled data using unclassified, or non-categorised, inputs. Commonalities are identified which allow clustering and association. Clustering algorithms create structures or patterns within raw data, which include exclusive, overlapping or hierarchical, and probabilistic algorithms. Association uses rules to create relationships between variables. Unsupervised learning can be used for news production, computer vision, medical imaging, anomaly detection, customer persona definition, and product recommendation sales engines. Unsupervised learning can apply dimensionality reduction, which reduces high volume data input sets to smaller sizes, possibly in pre-processing, with data integrity being protected. This may use principal component analysis (PCA), which reduces dimensionality within a data set.
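By way of an illustrative sketch only, the following fragment combines the two unsupervised techniques referred to above, PCA for dimensionality reduction and k-means for clustering, on an unlabelled copy of a standard dataset; the cluster count and dataset are arbitrary choices for the example.

```python
# Sketch of unsupervised learning: PCA reduces dimensionality and k-means
# clustering then finds structure in the unlabelled data.
from sklearn.datasets import load_iris
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)        # labels are deliberately ignored

# Dimensionality reduction: project four features onto two principal components.
reduced = PCA(n_components=2).fit_transform(X)

# Clustering: group the unlabelled observations into three clusters.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(reduced)
print("cluster sizes:", [int((kmeans.labels_ == k).sum()) for k in range(3)])
```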

C. Semi-Supervised Learning

Semi-supervised learning combines limited data training with unlabelled data searching. This can be more efficient, especially when using high data volumes.
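One common semi-supervised technique is self-training, where a model trained on the small labelled set progressively labels the unlabelled remainder. The sketch below uses scikit-learn's SelfTrainingClassifier; the labelled fraction and base model are arbitrary illustrative choices.

```python
# Sketch of semi-supervised learning: a small labelled set is combined with a
# larger unlabelled set (labels marked as -1) via self-training.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)

# Pretend most labels are unknown: keep roughly 20% and mark the rest as -1.
y_partial = y.copy()
y_partial[rng.random(len(y)) > 0.2] = -1

model = SelfTrainingClassifier(LogisticRegression(max_iter=1_000))
model.fit(X, y_partial)
print(f"labelled fraction used: {(y_partial != -1).mean():.2f}")
print(f"accuracy on full labels: {model.score(X, y):.2f}")
```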

D. Reinforcement Learning

Reinforcement learning (RL) applies cumulative rewards to structure intelligent agent action. This creates incentives for the agent to adopt an optimal policy that generates the maximum reward function. The difference between optimal and sub-optimal conduct is referred to as the “regret.” Reinforcement learning uses a Markov Decision Process (MDP) which models decision taking. Reinforcement learning is applied in autonomous driving vehicles and for gameplay, including backgammon, checkers, and Go.
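A toy Q-learning sketch may illustrate how cumulative rewards shape an agent's policy; the corridor environment, reward values, and learning parameters below are invented for illustration and are not drawn from any cited system.

```python
# Toy Q-learning sketch: an agent on a 1-D corridor learns, through cumulative
# rewards, a policy that walks right towards the goal state.
import random

N_STATES, GOAL = 6, 5            # states 0..5, reward is at state 5
ACTIONS = [-1, +1]               # step left or right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: move the estimate towards reward + discounted future value.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = next_state

# Learned policy: the preferred action in each non-goal state.
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)})
```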

E. Deep Learning

Deep learning uses successive layering to improve processing. Deep learning is generally based on ANNs, including Convolutional Neural Networks (CNNs) used in image recognition and processing. Convolution involves the production of a third function, derived from two other functions as the integral of the product of the first two. ANNs and CNNs generally use an input layer and output layer with a series of intermediate hidden layers depending upon the depth of the model. Convolution in a CNN is carried out within the hidden layers, with the result being passed on to the higher layers. Deep learning is based on neural layer construction, test sampling, cost (error) function identification, back propagation, gradient descent, and final adjustment.
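An illustrative PyTorch sketch of a small convolutional network, with an input layer, hidden convolutional and pooling layers, and an output layer, is set out below; the image size, channel counts, and class count are assumptions made purely for the example.

```python
# Sketch of a small convolutional neural network in PyTorch: an input layer,
# hidden convolutional/pooling layers, and a final output (classification) layer.
import torch
from torch import nn

model = nn.Sequential(
    # Hidden layer 1: convolution over a 1-channel 28x28 image, then pooling.
    nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),                 # 28x28 -> 14x14
    # Hidden layer 2: a deeper convolutional layer.
    nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),                 # 14x14 -> 7x7
    # Output layer: flatten the feature maps and classify into 10 classes.
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),
)

dummy_batch = torch.randn(4, 1, 28, 28)          # four fake grayscale images
print(model(dummy_batch).shape)                  # torch.Size([4, 10])
```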

F. Spiking Neural Networks (SNNs) & Large Language Model (LLMs)

Machine learning may also use Spiking Neural Networks (SNNs), which are artificial neural networks that replicate neural synaptic behaviour in which neurons only fire once a membrane (action) potential threshold is reached. This avoids continuous firing as with perceptron networks. Other more specific types of algorithms can be used. These include Convolutional Neural Networks (CNNs), Long Short Term Memory Networks (LSTMs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Radial Basis Function Networks (RBFNs), Multilayer Perceptrons (MLPs), Self-Organising Maps (SOMs), Restricted Boltzmann Machines (RBMs), Deep Belief Networks (DBNs), and AutoEncoders (AENs). A large number of different deep learning programmes have been developed over time.
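A toy leaky integrate-and-fire neuron may illustrate the spiking behaviour described above; the threshold, leak factor, and input drive below are arbitrary illustrative values rather than parameters of any real SNN implementation.

```python
# Toy leaky integrate-and-fire neuron: the unit accumulates input current and
# only emits a spike when its membrane potential crosses a threshold, unlike a
# perceptron unit that produces an output on every step.
import numpy as np

rng = np.random.default_rng(1)
steps, threshold, leak, potential = 200, 1.0, 0.95, 0.0
spikes = []

for t in range(steps):
    input_current = rng.uniform(0.0, 0.12)        # noisy input drive
    potential = potential * leak + input_current  # leaky integration
    if potential >= threshold:                    # firing threshold reached
        spikes.append(t)
        potential = 0.0                           # reset after the spike

print(f"{len(spikes)} spikes in {steps} steps, at times {spikes[:5]}...")
```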

G. Technology and Legal Limitations

While substantial technological advancements have occurred, significant difficulties still arise, especially with DNNs and LLMs and other forms of Generative A.I. (GenAI). TechTech advantage and disadvantage has already been referred to. Specific issues with DNNs and LLMs can be summarised as including Hallucinations (where the systems produce fictional facts including on relevant laws), Model Bias (inherent distortions due to unforeseen data input effects), Model Infection (corruption), Model Drift (more recent models encounter data that they were not trained to manage), Response Drift (different answers are given to the same question at different times), New Version Inefficiency or Error (with the new code versions being more error prone than earlier copies), Training Opacity (non-disclosure of training data used), Interpretability (lack of transparency and explainability), Noise (distortion), Data Breach (copyright infringement), Data Dependence (with all results being dependent on the underlying training data input), and Data Exhaustion (due to only a limited supply of sufficient quality training data being available). Issues also remain with regard to data integrity and security, cyber threats and attacks, and cost.

A series of further wider ethical issues also arise as with regard to, for example, digital screening, digital surveillance, digital divide, digital security, social and community impact, genetically modified foods (GMFs), genomic and genetic protection, and wider employment, economic and environmental impacts.

A series of further core legal issues also arise such as with regard to the nature of digital rights, digital information and data, digital property, digital contract, digital access and internet access, digital public law including the cross-border enforcement of public laws, digital infrastructure, digital regulation, digital dispute resolution, digital International Private Law (IPL) and digital Public International Law (PIL). Continuing specific problems arise in terms of digital privacy, personal data protection rights, parallel freedom of information entitlements, copyright and intellectual property protection as noted, computer rights protection and digital service rights and monopoly and competition law abuse. All of this has to be considered in the design and construction of any complete and effective AI response.

VI. Artificial Intelligence Companies

A large number of major companies and innovative new developers have entered the A.I. field. Two of the most important developers in this area are DeepMind, a subsidiary of Google, and OpenAI, principally funded by Microsoft. Elon Musk also set up x.AI in July 2023 to compete with DeepMind and OpenAI. Additionally, significant A.I. work is being carried out inter alia by IBM, Meta, Palantir, Anthropic, Inflection A.I., Adept, Stability A.I., Alphabet, NVIDIA, Tesla, Mobileye, Dynatrace, UiPath, C3AI, SentinelOne, Upstart, Bayonet AI, Darktrace, and Aurora Innovation.

A. DeepMind

DeepMind is a London based neural development platform that was set up on September 23, 2010 by Demis Hassabis, Shane Legg and Mustafa Suleyman. DeepMind’s objective was to produce an interdisciplinary approach to A.I. research based on machine learning, neuroscience, engineering, mathematics, simulation and computing infrastructure to build powerful general purpose learning algorithms that can perform in complex environments. Additionally, DeepMind has a long-term objective of producing AGI. DeepMind attempts to translate cutting-edge machine learning into real world applications to solve some of the world’s most difficult problems.

DeepMind’s early programs learned to play 49 different Atari video games (only using pixels and scores data) with AlphaZero being able to outplay humans at chess, Japanese chess (shogi) and Chinese Go. AlphaGo beat the world professional champion Go player, Lee Sedol, in 2016, winning four games out of five. DeepMind’s AlphaFold was able to unlock protein folding in 2018 and 2020.

DeepMind’s other leading products include AlphaStar (to play StarCraft II), WaveNet and WaveRNN (text to speech delivery), AlphaCode (computer code production), GATO (multifunctional neural network), Sparrow (Chatbot text or text-to-speech), Chinchilla AI (large language model), and Sonnet (high level library for use with Google’s TensorFlow). DeepMind has also been developing the Gemini multimodal LLM which would compete with OpenAI’s ChatGPT.

DeepMind maintains a DeepMind Ethics & Society (DMES) initiative to promote research and ensure that A.I. works for all. This examines issues such as privacy, transparency and fairness, A.I. morality and values, governance and accountability, global complex challenges, misuse and unintended consequences and economic impact with inclusion and equality.

B. OpenAI

OpenAI is an American A.I. research and development company headquartered in San Francisco. OpenAI was established by a number of parties, including Sam Altman, Trevor Blackwell, Greg Brockman, Elon Musk and Peter Thiel on December 11, 2015. OpenAI was set up with a pledge of $1 billion from various parties, including Elon Musk. OpenAI operates through a non-profit, OpenAI, Inc., and a for-profit subsidiary, OpenAI Limited Partnership. Brockman and Yoshua Bengio sought to hire the leading researchers in the area to establish new open source collaboration, which ultimately resulted in Musk and Altman announcing OpenAI at the end of an AI Conference in Montreal, Canada in December 2015. Altman would become involved with other innovative start-ups, including in the areas of eye (iris) identification and cryptocurrencies (Worldcoin), human life extension (Retro Biosciences), nuclear fission (Oklo) and nuclear fusion (Helion). Altman was later removed as CEO by the four-member OpenAI board on November 17, 2023 due to perceived communications issues, and Brockman resigned as chairman on the same day, although Sam Altman was subsequently reinstated on November 21, 2023.

Musk had resigned from the board of OpenAI due to potential conflicts of interest in 2018. In 2019, OpenAI switched from a non-profit status to include a capped for-profit operation, with profits being limited to 100 times investment, which allowed OpenAI to license technologies commercially. Microsoft’s original investment was increased to ten billion dollars in January 2023, with it being reported that Microsoft would receive seventy-five percent of the company’s profits until its investment was repaid, followed by a forty-nine percent stake in OpenAI. OpenAI has been valued at eighty-six billion dollars. Elon Musk subsequently commenced an action against Sam Altman and OpenAI on February 29, 2024 for breach of contract in undermining the original objectives and mission of OpenAI to make the benefits of AI available for the benefit of humanity on a non-proprietary basis.

OpenAI has been able to develop a number of products and applications including Gym (general intelligence benchmark), RoboSumo (robotic mobility training), Debate Game (debate training), Dactyl (robotic hand manipulation), GPT (generative pre-training language model), GPT-2, GPT-3 and Chat Generative Pre-trained Transformer (ChatGPT) as well as MuseNet (music composition), Whisper (general purpose speech recognition model), API (application programming interface access model), DALL-E and CLIP (image generation), Microscope (neuron visualisation model) and Codex (code generation).

ChatGPT was a GPT-3.5 based chatbot language model brought to market in November 2022 based on supervised and reinforcement learning. ChatGPT consists of a “Chat Generative Pre-trained Transformer.” ChatGPT uses advanced natural language processing (NLP) to produce human targeted conversational question responses to prompts with the system also producing articles, fictional stories, poems and computer code. GPT-3 has been used to support the development of DALL-E (with the creation of images from text), CLIP (connecting text and images) and Whisper (multi-lingual voice-to-text) as well as the ChatGPT chatbot. GPT-4 was released on March 14, 2023 and is multimodal, combining text and image modalities, and was trained on forty-five gigabytes of data as opposed to GPT-3’s seventeen gigabytes. An even more powerful GPT-5 was expected to be released by OpenAI in December 2023. It was separately claimed that OpenAI had been working on an even more advanced model, referred to as Q*, that may approximate full AGI, with this forming part of the alleged reasons for Sam Altman’s removal from office at OpenAI in November 2023.

C. X.AI

x.AI was launched by Elon Musk on July 12, 2023 with the help of a number of other leading experts from other A.I. and technology companies, including Igor Babuschkin from DeepMind and Dan Hendrycks, Director of the Center for AI Safety (CAIS) in San Francisco. The goal of x.AI is to understand “the true nature of the universe.” The platform would compete with OpenAI, with funds being raised from other investors, including investors in Tesla, SpaceX and Twitter, which would be renamed X Corp. Elon Musk could use training data from X Corp (Twitter) and Tesla’s D1 chip supercomputer, Dojo, which was designed to train Full Self-Driving (FSD) advanced driver assistance systems. Elon Musk has separately referred to creating a form of “TruthGPT” (later referred to as Grok), without controlled political correctness and with the development of advanced mathematical reasoning which was not available in other models. The system would be a “maximum truth-seeking AI” and “maximally curious” and assist in the search for intelligent life in the universe. Elon Musk has stated that “AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production” and that it has “the potential of civilization destruction.”

D. IBM

IBM developed Watson, which won the Jeopardy! competition in 2011, and TrueNorth in 2014. Watson is now available as watsonx.ai, which combines machine learning and generative A.I. IBM’s Deep Blue beat the world chess champion, Garry Kasparov, in New York City in 1997 after Kasparov had earlier beaten Deep Blue in Philadelphia in 1996. IBM has worked in such areas as drug studies, molecule screening, and mRNA with Moderna, as well as examining geospatial satellite data with NASA. IBM has adopted a multi-disciplinary and multi-dimensional approach to AI ethics building on earlier foundation models and machine learning techniques. A range of input, output, and governance risks have been identified with a risk based approach proposed.

E. Meta

Meta has been developing A.I. products with its own LLM, referred to as LLaMA (Large Language Model Meta AI), released in February 2023, and LLaMA 2, released in July 2023. LLaMA used four underlying models with seven, thirteen, thirty-three and sixty-five billion parameters, with LLaMA 2 using three models with seven, thirteen, and seventy billion parameters. LLaMA was originally restricted to research and academic use, with Meta announcing in July 2023 that it would make LLaMA available more widely on an open source basis. Meta President for Global Affairs, Nick Clegg, referred to openness as being “the best antidote to the fears surrounding AI.” Open source availability would increase use and customer adoption as well as encourage third party development and innovation. Meta has separately invested around ten billion dollars in developing the Metaverse and, further, established a separate generative AI unit, under Chris Cox, with the possible development of separate A.I. chatbots for individuals, advertisers and businesses across Instagram, WhatsApp and Facebook. Meta has been developing its own supercomputer referred to as the “AI Research SuperCluster (RSC).”

F. Palantir

Palantir was set up by German American entrepreneur, Peter Thiel, and others in 2003 and is now headquartered in Denver, Colorado. Palantir refers to itself as being a world class software developer for data analytics and data driven decision making. Its principal products include Palantir Gotham (a commercially available AI operation system), Palantir Apollo (software deployment including for security and regulatory purposes) and Palantir Foundry (semantic, kinetic, and dynamic data activation and analytics), and now AIP (which activates LLMs and other A.I. applications on private networks).

G. Anthropic

Anthropic is a San Francisco based A.I. start-up company formed in 2021 by Daniela Amodei, Dario Amodei, Jack Clark and Jared Kaplan, of whom the latter two had previously been with OpenAI. Anthropic refers to itself as a safety and research company that is working to build reliable, interpretable and steerable A.I. systems. Anthropic raised US$1.5 billion with investors including Amazon and Google. Anthropic’s principal product is “Claude,” which it describes as a next-generation A.I. assistant based on its research into training the most helpful, honest, and harmless A.I. systems. Claude can assist with summarisation, search, creative and collaborative writing, Q&A and coding with improved accessibility and safety and with controllable personality, tone, and behaviour.

H. Inflection A.I.

Inflection A.I. refers to itself as an A.I. studio and was established by Mustafa Suleyman, Reid Hoffman, and Karen Simonyan in Palo Alto, California in 2022. Inflection AI received $1.3 billion in investment in 2023. Mustafa Suleyman is a British A.I. engineer who was a co-founder of DeepMind and former head of applied A.I. at the firm, working inter alia on DeepMind Health. Inflection’s principal success has been the production of its chatbot “Pi” (“Personal intelligence”), which acts as an empathy-enabled, A.I. controlled personal assistant with improved interactivity and emotional reaction.

VII. Artificial Intelligence Comment and Debate

An increasingly large number of private and official papers have been published in the area of machine and artificial intelligence. Early papers simply recorded progress in the area, with the papers being either more general in content or non-specific in nature. More recent papers have become increasingly alarmist, focusing on potential threats and existential consequences. This shift reflects the wider degree of confusion and often ill-informed speculation that has arisen in this area. Other documents consist of proposed codes of conduct, or ethics, to guide AI program design and development. Other proposals have begun to construct more detailed or outline regulatory responses, such as in the EU, UK and US. Nevertheless, many of the proposals lack legal force, which makes them simply aspirational in content and intent.

Recently, a number of papers have been published by private bodies in response to rapid developments in the artificial intelligence area. Some of the papers specifically followed the launch of ChatGPT (based on GPT-3.5) in late 2022 and the subsequent launch of GPT-4. These papers have highlighted specific threats and proposed various sets of recommendations for reform.

A. Training AI Systems Pause (March 2023)

On 22 March 2023, a number of leading technology experts called for a pause of at least six months on the training of advanced A.I. systems more powerful than GPT-4. The open letter was signed by Elon Musk and other experts, such as Steve Wozniak, Co-founder of Apple, Yuval Noah Harari, Hebrew University of Jerusalem, Yoshua Bengio, University of Montreal, and Stuart Russell, University of California, Berkeley. The letter refers to the Asilomar A.I. Principles produced at the Future of Life Institute (FLI) Beneficial AI 2017 conference. Advanced A.I. was stated to represent “a profound change in the history of life on Earth” that should be planned for and managed with “commensurate care and resources,” with this not occurring and A.I. labs consequently being “locked in an out-of-control race to develop and deploy ever more powerful digital minds” that no one could “understand, predict, or reliably control.” A.I. systems were becoming “human-competitive” at general tasks, with powerful systems only to be developed once it was confirmed that “their effects will be positive and their risks will be manageable.” A.I. research and development had to be refocused on making systems more “accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” with “robust AI governance systems” being adopted. The pause of at least six months should be public and verifiable, with governments instituting a moratorium if the pause could not be enacted quickly.

A parallel set of policy recommendations was published by the Future of Life Institute in April 2023. The policy recommendations highlighted a series of issues concerning auditing, computational power, national agencies, liability, leakage, safety, and verification. Robust independent auditing regimes had to be established for general purpose models that could impact the rights or wellbeing of individuals, communities, or societies, with mandatory certification by accredited third party auditors. Organisations’ access to computational power had to be regulated to control abuse, specifically of large language models (LLMs). Capable A.I. agencies had to be established at the national level, such as the UK Office for Artificial Intelligence and the EU A.I. Board. An appropriate liability framework had to be established for A.I. derived harms. Measures had to be implemented to prevent and track A.I. model leakage, such as the leak of Meta’s LLaMA model in March 2023. A.I. models should also be watermarked through government mandates to prevent illegitimate distribution. Research funding had to be expanded to support the development of technical A.I. safety standards, especially in relation to alignment, robustness and assurance, and explainability and interpretability. Necessary standards had to be developed to identify and manage A.I. generated content, with other recommendations including the use of “bot-or-not” disclosure requirements. These are referred to as “Bot Or Not Exposure” (BONE) or “Bot Or Not Disclosure” (BOND) conditions for the purposes of this paper.
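By way of technical illustration only, and not drawn from the FLI recommendations themselves, the following minimal Python sketch shows one statistical watermarking technique of the kind such proposals contemplate: generation is biased towards a pseudorandom “green list” of tokens seeded by the preceding token, so that the proportion of green tokens later indicates whether a text was machine generated. All function names and parameters here are hypothetical.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Pseudorandomly partition the vocabulary, seeded by the previous token,
    # so that a generator and a later detector derive the same "green list".
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))

def watermark_score(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    # Share of tokens falling in the green list implied by the preceding token.
    # Ordinary human text should score near `fraction`; text generated with a
    # bias towards green tokens should score well above it.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev, vocab, fraction))
    return hits / max(len(tokens) - 1, 1)
```

On this kind of scheme, a detector needs access only to the seeding rule, not to the model itself, and can flag text whose score departs significantly from the expected baseline.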

While the open letter was welcomed by some commentators, others questioned the shortness of the six month pause, the inability to secure global adoption, and the lack of government support, especially in terms of the recommended moratoria. Former Microsoft CEO, Bill Gates, questioned the call for an A.I. pause, arguing that the technology’s effects would not be as damaging as predicted. Other commentators thought that the open letter was simply designed to allow competitors to catch up with OpenAI’s ChatGPT model.

B. Statement on AI Risk (May 2023)

The Center for AI Safety (CAIS) published a Statement on AI Risk on May 30, 2023. The CAIS was founded in San Francisco in 2022 as a non-profit organisation to promote the safe development and deployment of A.I. The mission of the CAIS is to reduce societal scale risks from A.I. and to equip policymakers, business leaders, and the wider world with the understanding and tools necessary to manage A.I. risk. CAIS research objectives include aligning A.I. with “Shared Human Values.” The short and succinct Statement on AI Risk states that “[m]itigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The statement was signed by a number of notable signatories including Demis Hassabis, Google DeepMind, Sam Altman, OpenAI, Bill Gates, Gates Ventures, Max Tegmark, MIT Center for AI and Fundamental Interactions, Geoffrey Hinton, Emeritus Professor of Computer Science, University of Toronto, and Yoshua Bengio.

The CAIS noted separately that A.I. can generate risks by perpetuating bias, supporting autonomous weapons, promoting misinformation, and conducting cyberattacks, with A.I. agents increasingly being able to act autonomously to cause harm. A.I. could create catastrophic or existential risk (eRisk in this paper). Specific difficulties could arise with automation, deception, and power acquisition, and with the creation of rogue A.I. systems. The CAIS specifically identifies eight material A.I. risks: malicious actors and weaponisation, misinformation, proxy gaming (training against faulty objectives), enfeeblement (loss of self-governance and machine dependence), value lock-in (concentration of power), emergent goals (loss of control through the development of unpredicted capabilities or goals within systems), deception (within A.I. systems), and power seeking behaviour (agentic systems acquiring power in ways that decrease control).

C. AGI and Beyond (Feb 2023)

Research and collaborative work has been advanced by OpenAI, which was originally founded by Elon Musk, Sam Altman, and others in December 2015. OpenAI CEO, Sam Altman, published a paper on Planning for AGI and Beyond in February 2023. AGI was defined as any AI system that is generally smarter than humans, with the mission of OpenAI being to ensure that AGI benefits all humanity. AGI could elevate humanity by increasing abundance, improving (“turbocharging”) the global economy, and supporting the discovery of new scientific knowledge. OpenAI intended to use AGI to empower humanity “to maximally flourish in the universe,” to ensure that the benefits of and access to AGI were “widely and fairly shared,” and to successfully “navigate massive risks,” including through the deployment of less powerful technology versions first to minimise “one shot to get it right” scenarios with otherwise damaging consequences.

OpenAI would release more powerful systems on a gradual basis to promote understanding and experience and to allow society and A.I. to co-evolve rather than planning in a vacuum. Democratised access would be provided, although OpenAI would become increasingly cautious as systems approached AGI, specifically to avoid disruption by malicious actors. OpenAI would produce increasingly “aligned and steerable models” that would give users wide discretion within constrained product “default settings.” OpenAI would separately promote a global conversation on how to govern systems, distribute their benefits, and secure fair and shared access. The OpenAI Charter requires it to assist other organisations “to advance safety instead of racing with them in late-stage AGI development.” OpenAI also promoted independent auditing.

Sam Altman considered that the first AGI would only be one point “along the continuum of intelligence,” with further progress expected from there. AGI could develop rapidly, with a slower transition allowing more careful reaction. The transition to superintelligence was possibly the most important project in human history, carrying enormous downside risks as well as enormous upside potential. Humanity could flourish to an extent that is impossible to foresee at this stage.

OpenAI has separately warned that superintelligence could lead to “the disempowerment of humanity or even human extinction” in a paper on Superalignment in July 2023. The paper notes that “scientific and technical breakthroughs” are necessary “to steer and control AI systems much smarter than us.” OpenAI would set up a research team to attempt to solve the control problem for superintelligence within four years, in particular through the training of AI systems to monitor other AI systems, with twenty percent of OpenAI’s compute dedicated to the effort. Superintelligence would be “the most impactful technology humanity has ever invented,” which “could help us solve many of the world’s most important problems,” although “the vast power of superintelligence could also be very dangerous” and lead to “the disempowerment of humanity or even human extinction.” Current alignment techniques, such as reinforcement learning from human feedback (RLHF), rely on humans’ ability to supervise A.I. and would not “scale to superintelligence” or to A.I. systems “much smarter than us.” In response, OpenAI would build a roughly human-level “automated alignment researcher,” developing a scalable training method, validating the resulting model, and stress testing its alignment pipeline.
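For readers unfamiliar with the technique referred to, reinforcement learning from human feedback typically begins by fitting a reward model to human preference comparisons between pairs of model outputs. The following toy Python sketch, a Bradley-Terry style preference model over hand-crafted features, is offered purely as an illustration of that core step; the features and function names are hypothetical and this is in no sense OpenAI’s implementation.

```python
import math

def features(text: str) -> list[float]:
    # Hypothetical hand-crafted features standing in for a neural encoder.
    return [len(text) / 100.0, float(text.count("sorry"))]

def score(w: list[float], text: str) -> float:
    return sum(wi * xi for wi, xi in zip(w, features(text)))

def train_reward_model(pairs: list[tuple[str, str]],
                       lr: float = 0.1, epochs: int = 200) -> list[float]:
    # Fit weights so that human-preferred responses score above rejected ones,
    # by maximising the log-sigmoid of the score difference (Bradley-Terry model).
    w = [0.0, 0.0]
    for _ in range(epochs):
        for chosen, rejected in pairs:
            diff = score(w, chosen) - score(w, rejected)
            grad = 1.0 / (1.0 + math.exp(diff))  # equals 1 - sigmoid(diff)
            fc, fr = features(chosen), features(rejected)
            w = [wi + lr * grad * (c - r) for wi, c, r in zip(w, fc, fr)]
    return w
```

The learned reward is then used to steer the policy model. The relevance to the Superalignment paper is that this loop presupposes human evaluators can reliably judge which output is better, which is precisely what breaks down once outputs exceed human competence.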

A separate paper was published on Our approach to alignment research, which is based on three pillars: training A.I. systems using human feedback, training A.I. systems to assist human evaluation, and training A.I. systems to carry out alignment research itself. Nevertheless, a number of limitations were accepted, including major underinvestment in robustness and interpretability research, the danger of scaling up and amplifying inconsistencies, biases, and vulnerabilities, the need to solve different problems as systems approach AGI, the difficulty of engineering a scalable training signal, and the dangers of using defective models. This work continues.

D. Ethical and Social Risk (Dec 2021)

DeepMind issued a paper on Ethical and Social Risks with LLMs in December 2021. The paper identified six core sets of exposure with twenty-one risks in total. The core threats related to: “(a) discrimination, exclusion and toxicity; (b) information harms; (c) misinformation harms; (d) malicious uses; (e) human computer interaction harms; and (f) automation, access and environmental harms.” Sources of origin are discussed, and potential risk mitigation approaches outlined. Mitigation devices included technology, research and design, cooperation, management, and social and public responses. The paper highlights the importance of collaboration between all relevant parties and the difficulties of setting appropriate benchmarks to measure adherence, or failure to adhere, to the targets set.

DeepMind and a number of other private institutions and universities produced a subsequent paper on Model Evaluation for Extreme Risks in May 2023. The paper highlights the importance of model evaluation in dealing with extreme risks, such as offensive cyber capabilities and strong manipulation skills. Evaluation would cover both what a model is capable of doing (capability evaluations) and its propensity to apply those capabilities to cause harm (alignment evaluations), as distinct from structural risks and harms arising from model incompetence. Models would be treated as highly dangerous if they had “a capability profile that would be sufficient for extreme harm, assuming misuse and/or misalignment.” Nine dangerous capabilities are identified.

The principal recommendations are based on embedding governance processes, including responsible training, responsible deployment, transparency, and appropriate security. A workflow is provided for training and deploying models, conducting model evaluation, and embedding extreme risk results within key safety and governance processes, based on responsible training and responsible deployment. Protection initially has to be developed through managed training, using either previous training runs or experimental models that can be evaluated, including through scaling (or inverse scaling) analysis. Responses can include further examination, training method adjustment, and rescaling. Responsible deployment should be subject to the conduct of a “Deployment Risk Assessment” (DRA), with evaluations being conducted pre- and post-deployment. Deployment safety may be affected by scale, use restrictions, generality, autonomy, tool use, depth of model access, oversight and moderation, global planning, and model adjustments. External transparency measures specifically include incident reporting, sharing pre-deployment risk assessments, scientific reporting, and educational demonstration. Appropriate security controls can be implemented through “red teaming” (adversarial challenge), monitoring, isolation, rapid response, and systems integrity. Evaluations should be comprehensive, interpretable, and safe.

Limitations may still arise with regard to factors beyond the A.I. system, unknown threat models, properties that are difficult to identify, emergence, the maturity of the evaluation ecosystem, and overtrust. Further difficulties may arise with regard to advancing and proliferating dangerous capabilities, competitive pressures, making only superficial improvements to model safety, and harms arising during the course of evaluation itself. A.I. developers should invest in research, craft internal policies, support outside work, and educate policymakers, while policymakers can systematically track the development of dangerous capabilities, invest in the relevant ecosystem, require the conduct of external audits, and embed extreme risk evaluations into A.I. deployment regulation.

E. Ethics of Artificial Intelligence (March 2020)

The European Parliament examined the Ethics of AI in 2020. The report adopts the European Commission’s definition of A.I. which refers “to systems that display intelligent behaviour by analysing their environment and taking actions - with some degree of autonomy - to achieve specific goals.” The report examines the impact of A.I. on society (including the labor market, inequality, privacy, human rights and dignity, bias and democracy), human psychology (relationships and personhood), the financial system, the legal system (criminal law and tort law), the environment and the planet (use of natural resources, pollution and waste and energy concerns) and trust (fairness, transparency, accountability and control). Ethical initiatives are reviewed with specific reference to international ethical proposals and measures.

F. Industry and Academic Comment

The comments and projections of a number of key industry and technology commentators have already been referred to. Various other contributions have been made to this debate. PayPal CEO, Dan Schulman, for example, considered that business leaders are “likely underestimating” the impact of generative A.I., with A.I. possibly producing thirty to forty percent productivity improvements, including in code development, with all of this requiring rigorous testing and for A.I. to be applied in a controlled and responsible manner. Other commentators consider that A.I. is not expected to achieve more humanlike cognitive abilities unless it is connected to the physical world through robots and designed with evolutionary principles.

Theoretical physicist, Michio Kaku, claims that while A.I. can process vast amounts of data from which impressive predictions may be possible, A.I. lacks a fundamental understanding of “truth,” with its analysis being limited to pattern identification and probabilities. A.I. does not possess the human intuition which allows people to assess the validity and reliability of information and to discern between truth and falsity. Kaku describes chatbots as just “glorified tape recorders” and considers that the forthcoming emergence of quantum computing will be of much more significance than A.I. alone, with quantum systems based on qubits, superposition, and entanglement.
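For completeness, and purely by way of illustration rather than anything drawn from Kaku’s remarks, the two concepts can be stated in standard notation. A single qubit can occupy a superposition of the two basis states, and a pair of qubits can be entangled so that neither has an independent state of its own:

$$|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$$

$$|\Phi^+\rangle = \tfrac{1}{\sqrt{2}}\big(|00\rangle + |11\rangle\big)$$

Measuring either qubit of the entangled pair immediately fixes the outcome for the other, a correlation with no classical counterpart, and it is this joint state space, growing exponentially with the number of qubits, that underlies the claims made for quantum computing.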

An open letter in support of A.I. was produced by BCS, The Chartered Institute for IT (formerly the British Computer Society), in July 2023. The letter stated that A.I. was “not an existential threat to humanity” and “will be a transformative force for good” if the correct “critical decisions about its development and use” are taken, with the UK helping to “lead the way in setting professional and technical standards in AI roles, supported by a robust code of conduct, international collaboration and fully resourced regulation.” This has been summarised in terms of the slogan, “Coded in Britain,” which could become “a global byword for high-quality, ethical, inclusive AI.”

Yuval Harari discusses A.I. in terms of “Alien Intelligence,” as noted, due to A.I. being able to manipulate humankind’s storytelling capability and its control of communication and language, such as through disinformation and fake intimacy. A.I. was described as having “hacked the operating system of our civilization,” with humanity approaching “the end of human history,” though not “the end of history, just the end of its human-dominated part.” Harari notes that “[l]anguage is the stuff almost all human culture is made of” and asks what “would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures?” Harari argues that “[w]e now have to grapple with a new weapon of mass destruction that can annihilate our mental and social world” and that “[w]e can still regulate the new AI tools, but we must act quickly.” For Harari, action is essential to protect democracy itself, which is “a conversation, and conversations rely on language,” with A.I. possibly destroying “our ability to have meaningful conversations, thereby destroying democracy.”

VIII. Artificial Intelligence Codes

A large number of codes of conduct or guidelines have been adopted in the A.I. and robotics areas in response to these threats and challenges. The guidelines contain different formulations of the principal general provisions and principles that should apply to new systems. For example, AlgorithmWatch prepared an AI Ethics Guidelines Global Inventory which attempted to identify all of the principal principles applying to Automated Decision Making (ADM) and Artificial Intelligence (A.I.), listing around 167 guidelines.

These codes attempt to develop commonly agreed sets of guidelines and principles based on wider identifiable ethical and social principles. They attempt to integrate scientific and social concerns within development processes to avoid unnecessary and irresponsible commercialisation and risk generation. The difficulty that arises is that these codes are essentially aspirational in content and intent, with no specific legal basis and no application or enforcement mechanisms. They are still of value in creating a larger cultural framework against which A.I. and other technological development can be assessed.

A series of twenty-three Asilomar AI Principles was, for example, developed at the Future of Life Institute (FLI) Beneficial A.I. Conference in January 2017. The FLI was established in March 2014 by Swedish-American physicist, Max Tegmark, and Estonian billionaire, Jaan Tallinn, in Cambridge, Massachusetts, US, with funding from Elon Musk. The FLI is supported by a number of other leading physicists, entrepreneurs, and media figures. Its objective is to reduce global catastrophic and existential risks, with a specific focus on A.I., BioTech, nuclear weapons, and climate change. The Conference considered the American professor of biochemistry and science fiction writer Isaac Asimov’s “Three Laws of Robotics” and certain other codes available at that time. The twenty-three Asilomar principles are grouped into research issues (1-5), ethics and values (6-18), and longer term issues (19-23). The FLI maintains an adherence signature page.

The “Institute of Electrical and Electronics Engineers” (IEEE) in the US established a Global Initiative on Ethics of Autonomous and Intelligent Systems (GIEAIS), which produced eight general principles within the first edition of its Ethically Aligned Design principles. The mission of the IEEE Global Initiative was “[t]o ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.” The U.S. Computing Community Consortium (CCC) and the Association for the Advancement of Artificial Intelligence (AAAI) produced a separate proposed 20-Year AI Roadmap in 2019. A set of Ten Commandments of Computer Ethics was also produced by the Computer Ethics Institute in Washington, D.C. in 1992.

All of these codes and sets of principles are of value, in particular, in establishing legitimate targets, standards, and objectives. The codes are, nevertheless, essentially limited, being only aspirational in purpose and effect. They do not reflect underlying legal rights and obligations, which are otherwise left unprotected and always subject to separate action in each area. Further, the codes are non-binding in nature, which also makes them incapable of direct compliance, enforcement, and sanction. Any substantial and effective longer term solution will have to incorporate a significant legal component which reflects existing or new rights and obligations and is capable of full adoption, implementation, and compliance, supported by necessary oversight and sanction. This can be achieved without unnecessarily restricting innovation and development.

IX. Artificial Intelligence International and National Response

A number of more formal initiatives on A.I. have begun to be adopted at the international and domestic level. These adopt different approaches and technical methodologies depending upon underlying political and policy sentiment. Some of the most substantial proposals and developments are referred to below.

A. European Union

A.I. within the European Union (EU) can be considered in terms of the earlier work on machinery and the later EU draft A.I. Regulation, which is based on the objective of ensuring trustworthy A.I. A.I. and robotics systems have to be subject to appropriate health and safety standards to the extent that they constitute a form of machinery. Machinery is defined in terms of moveable machines for the purposes of this paper. A comprehensive health and safety regime applies to machinery within the EU under the Machinery Directive 2006/42/EC, which is to be replaced by the proposed new EU Machinery Regulation. Machinery is generally defined within the EU in terms of an assembly with movable parts. A system for the free movement of machinery within the EU is created, with a list of high risk machinery products, an indicative list of safety components, and a detailed list of essential health and safety requirements related to the design and construction of machinery products. Machinery is subject to a conformity assessment, a certification procedure, and an affixed CE marking scheme, with manufacturers also being required to produce a supporting technical document for machinery products. The essential health and safety conditions are based on certain general principles and more detailed requirements. The proposed Machinery Regulation was adopted as part of the EU 2020 Commission Work Programme, A Europe fit for the Digital Age, the Single Market Act, and the New Legislative Framework (NLF).

Artificial physical agents (APAs), which constitute robots with A.I. control systems, will be subject to the regulatory regimes applied to both robots and A.I. This is confirmed in the EU Draft A.I. Regulation. Compliance may also be necessary with other more specific requirements, such as in relation to electrics, electronics, and telecommunications, as well as possibly cybersecurity. The overall effect would be to create an integrated regime for AI and robotics (AIR) systems.

The EU issued a Draft AI Regulation in April 2021. The EU had produced a White Paper on AI in February 2020 which examined policy options for promoting AI and managing relevant risks. The European Commission had confirmed that it would bring forward legislation to manage the human and ethical implications of AI as part of its 2019–2024 political guidelines. AI had also been considered by the European Council in 2017 and by the European Parliament in 2020 and 2021.

The objective is to create a human centric approach that promotes confidence and trust in AI. The EU policy is claimed to be principle based and proportionate, responding to relevant risks without limiting technological development or innovation. The system is stated to be compatible with the EU Charter of Fundamental Rights, especially with regard to data protection, consumer protection, non-discrimination, and gender equality, as well as with the EU GDPR and the Law Enforcement Directive. The proposal was adopted within the New Legislative Framework (NLF) for products. It also supports other core initiatives, including the EU Digital Decade, the Data Governance Act, the Open Data Directive, and the EU strategy for data. An appropriate governance system is set up using Member State authorities and a new European Artificial Intelligence Board (EAIB). The proposal drew on the work of a High Level Expert Group (HLEG) on A.I. set up in 2018 to implement the Commission’s Strategy on Artificial Intelligence, which produced a set of Ethics Guidelines for Trustworthy AI in 2019. The Commission adopted an “Option 3+” approach under the proposed A.I. Act following an impact assessment under its Better Regulation policy and Regulatory Scrutiny Board examination.

The A.I. Regulation applies to artificial intelligence systems, which are defined as techniques and approaches that can generate specified outputs. The proposal adopts a risk based approach with four levels of unacceptable risk, high risk, limited risk, and minimal risk, with separate treatment of real time remote biometric identification systems. Specific protections are imposed with regard to other high risk systems where the A.I. system is a safety component or falls within certain other identified categories of systems. Specific obligations are imposed on high risk systems and on system providers and users. The Regulation also contains a notification procedure, with standards, conformity assessment, and certificate registration procedures, transparency obligations, and measures to promote innovation, including through the establishment of an A.I. regulatory sandbox procedure. Regulatory sandboxes are to provide a controlled environment to facilitate the development, testing, and validation of innovative A.I. systems over limited time periods before market placement or entry into service. A European Artificial Intelligence Board (EAIB) was to be established, with appropriate competent authorities appointed in each Member State under the Regulation. An EU database is to be established to support high risk A.I. systems providers. Additional requirements are imposed on post-market monitoring, information sharing and market surveillance, the development of codes of conduct for non-high risk systems, confidentiality and penalties, delegation, and final provisions. The effect is to create a significant, although essentially only outline, framework for new A.I. oversight and control within the EU. Certain core prohibitions are imposed, with general obligations imposed on identified high risk systems and other areas subject to a voluntary code of conduct mechanism. The core regulatory framework is based on general directions on the maintenance of risk management systems, data and data governance, technical documentation, record keeping, transparency, accuracy, robustness, and cybersecurity. It remains to be seen how these standards will be developed over time. The treatment of many core issues is omitted, such as in relation to recursion, replication, covert operations, super-capacity and singularity, as well as the creation of super AI networks and human interface and cyborg mechanisms.

The European Commission also produced a Coordinated Plan on Artificial Intelligence 2021 Review. The Coordinated Plan was stated to represent a joint commitment by the European Commission and Member States to maximise Europe’s potential to compete globally. The Commission, Member States, and private actors were to accelerate investment in A.I. technologies to promote resilient economic and social recovery, act on A.I. strategies and programmes, and align A.I. policy to remove fragmentation and respond to global challenges. The plan contained four key sets of proposals: to set enabling conditions for A.I. development and uptake in the EU; to make the EU a place of excellence “from the lab to the market”; to ensure that A.I. works for people and is a force for good in society; and to build strategic leadership in high impact sectors.

The European Parliament’s negotiating position on the AI Act was adopted on 14 June 2023. This maintained a full ban on cognitive behavioural manipulation of people and vulnerable groups (such as through voice activated toys), social scoring (classifying people by behaviour, socio-economic status, or personal characteristics), and real-time and remote biometric identification (including facial recognition). Prohibited practices were extended to include intrusive and discriminatory uses.

This would be subject to certain exemptions, such as “post” remote biometric identification after a delay to assist criminal prosecution and with court approval. High risk systems would include items covered by EU product safety legislation (including toys, aviation, cars, medical devices, and lifts), with eight further categories of systems to be registered on an EU database. Transparency requirements would be imposed on generative AI, such as ChatGPT models, including content disclosure, prevention of illegal content generation, and publication of summaries of the copyrighted data used for training purposes. Limited risk AI systems would be subject to minimal transparency requirements to allow users to make informed decisions.

An Open Letter was sent by 160 executives from major European companies to the European Commission, the European Council, and the European Parliament on June 30, 2023 concerning the EU A.I. Act. This expressed “serious concerns about the proposed EU Artificial Intelligence (AI) Act,” with the legislation possibly jeopardising “Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” especially with regard to GenAI. An appropriate “transatlantic framework” had to be built as a priority, with Europe having to remain part of the “technological avant-garde.”

The EU is separately to invest around $240 million to test A.I. systems before they can be brought to market through common “Testing and Experimentation Facilities” (TEFs). TEFs will allow complex digital technologies to be assessed in real world situations in physical and simulated environments. The facilities include the CitCom.ai TEF for smart cities and communities, TEF-Health for healthcare, AI-Matters for manufacturing and robotics, and AgrifoodTEF for agricultural production.

B. United Nations

The UN Secretary General produced a Roadmap for Digital Cooperation in June 2020 with eight key areas for action. The Roadmap notes that digital technology does not exist in a vacuum and has enormous potential for positive change, although it can reinforce and magnify existing fault lines and aggravate economic and other inequalities. It refers to the eleven standards of responsible state behaviour in cyberspace produced by the UN in 2015, which consist of five standards of a limiting character and six principles of good state practice and positive duties to support international security. A High-level Panel on Digital Cooperation was set up in July 2018, which produced five sets of recommendations on “The Age of Digital Interdependence” in June 2019. The General Assembly established a new five year open-ended working group on security in 2020. The Roadmap examines the implementation work needed to give effect to these recommendations. Specific concerns arose regarding AI, including the lack of representation and inclusiveness in global discussions, a lack of coordination and accessibility in AI related initiatives, and the need for additional capacity and expertise in public sectors. The UN produced a separate report on AI Actions in 2022 which reviewed the work of all of the principal international organisations and international financial institutions. A separate Resource Guide on AI Strategies was produced in 2021 which considered the nature of AI ethics and relevant international strategy, as well as technical standards and national AI strategies, with observations on the way forward.

Preparation of a draft text of a recommendation on the Ethics of AI has been taken forward within the United Nations system, specifically by the UN Educational, Scientific, and Cultural Organisation (UNESCO). UNESCO was requested to lead and facilitate international work on information society ethics at the World Summit on the Information Society in 2003 and 2005, with work being taken forward through the International Bioethics Committee (IBC), set up in 1993, and the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), set up in 1998, in coordination with the Intergovernmental Bioethics Committee (IGBC). The Director General of UNESCO was mandated to produce a standard setting instrument on the Ethics of AI in November 2019, with an Ad Hoc Expert Group (AHEG) being established to prepare a draft recommendation text and a working document being produced in April 2020. The UN Chief Executives Board for Coordination (CEB) adopted a strategic approach and roadmap for the development of AI in May 2019. Secretary General António Guterres stated that AI must become a force for good. UNESCO had earlier adopted the Internet Universality framework, based on Human Rights, Openness, Accessibility, and Multi-stakeholder participation (ROAM), in 2015, with a network of UNESCO chairs and Category II Centres assisting in the development of relevant AI partnerships, such as with the International Research Centre on Artificial Intelligence (IRCAI).

The AHEG April 2020 working document on AI Ethics considered alternative AI definitions, including those in the COMEST 2019 study on AI Ethics and the EU White Paper on AI. The principles and policy recommendations produced were to be based on international human rights, with the Internet Universality framework endorsed by the UNESCO General Conference in 2015 and by the High Level Panel on Digital Cooperation applying the Human Rights, Openness, Accessibility, and Multi-stakeholder participation (ROAM) principle. The foundational values would then be based on human rights and fundamental freedoms, inclusivity and non-discrimination (leaving no one behind), and sustainable development and environmental protection. Fifteen outline principles were identified by the roundtable discussion on Recommendation 3C of the Secretary General’s High Level Panel on Digital Cooperation. COMEST also identified eight specific principles. The working document accepted that many available AI ethics principles were vague and difficult to implement, with the AHEG objective being to move from high level statements to actionability. Ethics was to be considered in decision-making and design, advocacy, and evaluation, together with capacity building, following the CEB strategic approach and roadmap. The development of an ethical impact assessment (EIA) would also assist in predicting consequences, mitigating risk, avoiding harmful consequences, facilitating participation, and dealing with societal challenges. The AHEG produced an outline skeleton document. A summary of possible principles is provided in Annex 3, structured in terms of human rights, inclusiveness, flourishing, autonomy, explainability, transparency, awareness and literacy, responsibility, accountability, democracy, good governance, sustainability, safety and security, gender, age, privacy, solidarity, the value of justice, a holistic approach, trust, freedom, dignity, remediation, and professionalism. Other policy actions are outlined, including for adoption by the private sector. A list of relevant source documents is provided in Annex 5, and a list of other documents concerned with the ethical, legal, and social implications of AI is provided in Annex 6.

C. Organisation for Economic Cooperation & Development (OECD)

The OECD adopted five values-based principles on AI and made five recommendations for policy makers on AI in May 2019. The principles are based on: (a) inclusive growth, sustainable development and well-being; (b) human-centred values and fairness; (c) transparency and explainability; (d) robustness, security and safety; and (e) accountability. The recommendations focus on: (a) investing in AI R&D; (b) fostering a digital ecosystem for AI; (c) providing an enabling policy environment for AI; (d) building human capacity and preparing for labour market transition; and (e) promoting international cooperation for trustworthy AI. An AI system is defined as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” with AI systems “designed to operate with varying levels of autonomy.” AI knowledge is defined as “the skills and resources, such as data, code, algorithms, models, research, know-how, training programmes, governance, processes and best practices, required to understand and participate in the AI system lifecycle.” The AI system lifecycle has four phases: (1) design, data and model; (2) verification and validation; (3) deployment; and (4) operation and monitoring. The OECD has published a separate Framework for the Classification of AI Systems which classifies AI systems along five dimensions: (1) People & Planet; (2) Economic Context; (3) Data & Input; (4) AI Model; and (5) Task & Output. The OECD has also produced a separate Catalogue of Tools & Metrics for Trustworthy AI.

D. G20 AI Principles

The G20 Trade Ministers and Digital Economy Ministers met in Japan in June 2019 to discuss trade and digital economy matters. A number of recommendations were made on the digital economy, including the need to promote a human-centred future society, data free flow with trust, human-centred AI, governance innovation, security in the digital economy, the SDGs and inclusion, and the way forward. Trade initiatives were concerned with promoting dialogue on trade developments, market-driven investment decisions, trade and investment contributing to sustainable and inclusive growth, WTO reform and bilateral and regional trade agreements, the interface between trade and the digital economy, and the way ahead. On AI, the G20 would endeavour to provide an enabling environment for human-centred AI that promotes innovation and investment, with a focus on digital entrepreneurship, research and development, the scaling up of start-ups, and the adoption of AI by Micro, Small & Medium Enterprises (MSMEs). AI may promote economic growth, bring great benefits to society, and empower individuals, although it may also generate societal challenges, including transitions in the labour market, privacy, security, ethical issues, new digital divides, and the need for AI capacity building. G20 members would continue to promote the protection of privacy and personal data, with a set of G20 AI Principles being set out in the Annex based on the responsible stewardship of trustworthy AI and national policies and international cooperation for trustworthy AI.

E. China

The most detailed set of regulatory provisions on AI outside the EU has been produced in China, with the Interim Measures for the Management of Generative Artificial Intelligence Services, draft provisions having been produced in April 2023 and final measures in July 2023, coming into effect on August 15, 2023. It was reported that the final measures were less severe than the original proposals. The measures were produced by the Cyberspace Administration of China (CAC). The provisions are specifically issued under the Cybersecurity Law, the Data Security Law, the Personal Information Protection Law, and the PRC Law on Scientific and Technological Progress. The measures are made up of five chapters on general provisions, technology development and governance, service specifications, oversight, inspection and legal responsibility, and supplementary provisions.

The measures apply to the use of generative AI technologies to provide services to the public on the mainland involving the generation of text, images, audio, video, or other content. They do not apply where bodies such as enterprises, educational and research institutions, or public cultural and professional organisations develop and apply generative AI technology without providing services to the public. The underlying principles are the placement of equal emphasis on development and security, the combination of the promotion of innovation with governance under the law, and the employment of effective measures to encourage innovation subject to inclusive, prudent, and categorised and graded management.

Generative AI services are subject to the laws and administrative provisions governing social morals and ethics, as well as five other sets of specific obligations. They must also uphold “Core Socialist Values.” Effective measures are to be maintained to prevent discrimination on the basis of race, ethnicity, faith, nationality, region, sex, age, profession, or health in algorithm design, training data selection, model generation and optimisation, and the provision of services. Intellectual property rights, business ethics, and commercial secrets are to be respected, with advantages in algorithms, data, and platforms not being used for monopolistic or unfair competition purposes. The lawful rights and interests of others are to be respected, including in relation to physical and psychological well-being and the protection of image, reputation, honour, privacy, and personal information rights. Effective measures are to be employed to increase the transparency of generative AI services and to increase the accuracy and reliability of generated content, having regard to the service type. These constitute valuable directions, although their implementation may suffer from generality in terms of compliance and enforcement.

The measures are to encourage the innovative application of generative AI technology across all industries and fields, the generation of exceptional content, and the optimisation of use scenarios, as well as to support the coordination of efforts between industry associations, enterprises, education and research institutions, public cultural bodies, and other relevant professional bodies. Independent innovation is promoted in the development of basic technologies, including algorithms, frameworks, chips, and supporting software platforms, and in the promotion of infrastructure and public training data resource platforms. Specific conditions are imposed on pre-training, optimisation training, and other training data processing activities. Clear, specific, and feasible tagging rules must be used where manual tagging is conducted, with data quality checks carried out and necessary legal compliance training maintained for tagging personnel.

Service providers are responsible for online information content under relevant laws and must carry out all related online information security obligations. Service agreements are to be signed with users who register for generative AI services, clarifying the rights and obligations of both parties. Providers are to clarify and disclose the user groups, occasions, and uses of their services, guide scientific understanding, and employ effective measures to prevent minor users from becoming over-reliant or addicted. Providers must comply with relevant confidentiality obligations and cannot collect or retain unnecessary personal information, including in relation to user inputs or usage records, with such information not being unlawfully made available to others. Image and video content is to be labelled under the Provisions on the Administration of Deep Synthesis Internet Information Services.

Providers must provide safe and stable services throughout the course of users’ normal usage. Where providers discover illegal content, they must take measures including stopping its generation and transmission and removing it, employ measures such as model optimisation training to make necessary corrections, and report to the relevant departments. Providers must establish appropriate mechanisms for the handling of complaints and reports, with easily accessible portals and disclosure of the relevant procedures, and with complaints promptly handled and appropriate responses made to the public.

The relevant internet information, development and reform, education, science and technology, industry and information technology, public security, radio and television, and press and publication departments are to strengthen the management of generative AI services under the law. Providers of services with public opinion properties or the capacity for social mobilisation are to carry out security assessments under relevant state provisions, including the filing, modification, and cancellation of filings on algorithms under the Provisions on the Management of Algorithmic Recommendations in Internet Information Services. Users have a right to complain or report to the relevant departments where generative AI services do not comply with relevant laws, administrative regulations, or these measures. Providers are to cooperate with the relevant departments in the carrying out of oversight inspections, including by explaining the sources, scale, types, and tagging rules of training data and the relevant algorithm mechanisms, and by providing necessary technical, data, and other support and assistance. Relevant bodies and personnel are subject to strict confidentiality obligations, including with regard to state secrets, commercial secrets, personal privacy, and personal information. The state internet information department may notify relevant bodies to adopt technical measures where generative AI services provided from outside China do not comply with relevant laws, regulations, or these measures. Breaches of the interim measures are subject to the penalties provided under the supporting regulatory provisions on which they are based.

Relevant definitions are provided, including of generative AI technology, service providers, and service users, with relevant administrative permits to be obtained under the law. Foreign investment in generative AI services must comply with relevant laws and administrative regulations on foreign investment. The measures apply from August 15, 2023.

F. U.S. Strategy

A number of initiatives have been adopted in the U.S. in the area of AI. Former President Trump highlighted the importance of ensuring American leadership in the development of emerging technologies, including AI, which would make up the Industries of the Future, in the State of the Union address on February 5, 2019. An executive order on AI was adopted by the Trump administration in February 2019, with a supporting document produced on AI for the American People.

Strategic Plans were adopted in both 2016 and 2019, with a 2023 update of the National AI R&D Strategic Plan issued in May 2023. The National Artificial Intelligence Initiative Act (NAIIA) was adopted in 2020, which established the National Artificial Intelligence Initiative (NAII) and provided for the creation of the National Artificial Intelligence Initiative Office (NAIIO). Other U.S. AI connected bodies include the Select Committee on AI (SCAI), the Machine Learning and AI Subcommittee (MLAI-SC), the Networking and Information Technology Research & Development (NITRD) program, the AI R&D Interagency Working Group (AI R&D IWG), the National AI Advisory Committee (NAIAC), the National AI Advisory Committee’s Subcommittee on Law Enforcement (NAIAC-LE), and the National Artificial Intelligence Research Resource Task Force (NAIRRTF).

President Biden referred to the need to address the risks created by AI before a panel on AI in San Francisco in June 2023. On Friday, July 21, 2023, the Biden Administration agreed with a number of the principal technology companies, including Amazon, Google, Meta, and Microsoft, as well as OpenAI, Anthropic, and Inflection, that they would adhere to a set of voluntary AI safety commitments. These included adopting protections such as watermarking AI generated content, thoroughly testing systems, including by expert outside third parties, before their public release, sharing information on risk reduction, and investing in cybersecurity. Anthropic, Google, Microsoft, and OpenAI subsequently announced on Wednesday, 26 July 2023, that they would establish a new Frontier Model Forum for the purpose of “ensuring the safe and responsible development of frontier AI models.” The forum would only consist of companies producing large scale machine learning models exceeding the capabilities of the most advanced current systems. Four specific objectives were identified. The Senate held AI Insight Forum meetings with over twenty market leaders in September and October 2023, led by Senate Majority Leader, Chuck Schumer (D-NY), with Senators Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN).

1. National Artificial Intelligence Initiative Act 2020

The NAIIA was adopted in 2020. This provided for the creation of the National Artificial Intelligence Initiative (NAII) with a National Artificial Intelligence Initiative Office (NAIIO) and a National Artificial Intelligence Advisory Committee (NAIAC). An assistance programme was to be established to make funds available to a network of National AI Research Institutes. The AI activities of the Department of Commerce are specified. Provisions are included on the promotion of AI research and education. A separate Department of Energy AI research programme of activities is provided for.

2. National AI Initiative (NAII)

The NAII was set up under the NAIIA 2020 to ensure continued US leadership in AI R&D, lead the world in the development and use of trustworthy AI systems in the public and private sectors, prepare the US workforce for the integration of AI systems across all sectors of the economy and society, and coordinate ongoing AI research, development, and demonstration activities among agencies such as the Department of Defense and the Intelligence Community. The NAIIO was to be established by the Director of the Office of Science and Technology Policy to carry out the responsibilities set out. The NAII is organised under six strategic pillars of Innovation, Advancing Trustworthy AI, Education and Training, Infrastructure, Applications, and International Cooperation. The NAII is supported by the NAIIO, which acts as a central point of contact for technical and programme information exchange on AI related initiatives, conducts public outreach, and promotes access to relevant technologies, innovations, best practices, and expertise.

3. National AI R&D Strategic Plan

The Strategic Plan sets out the major research challenges arising in AI and the need to coordinate and focus federal R&D investment to ensure continued US leadership in the development and use of trustworthy AI systems, prepare the current and future US workforce for the integration of AI systems across all sectors, and coordinate ongoing AI activities across federal agencies. The Biden administration remained committed to advancing responsible AI systems that are ethical, trustworthy, and safe and serve the public good. The original seven strategies set out in the 2016 Strategic Plan were continued, with a new Strategy 8 (expanding public private partnerships to accelerate advances in AI) added under the 2019 Update and an additional Strategy 9 (establishing a principled and coordinated approach to international collaboration in AI research) inserted under the 2023 Update. Each of the Strategies is expanded in separate sections of the Strategic Plan. The NAIIA itself consists of five titles which establish the National AI Initiative (NAII), set up National AI Research Institutes, and specify the AI activities of the Department of Commerce, the National Science Foundation, and the Department of Energy AI Research Program. AI is defined as “a machine based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” Machine learning is defined as “an application of artificial intelligence that is characterised by providing systems the ability to automatically learn and improve on the basis of data or experience, without being explicitly programmed.”

4. AI Executive Order on Safe, Secure and Trustworthy AI

President Biden issued an Executive Order on Safe, Secure, and Trustworthy AI on 30 October 2023, in advance of the international AI Safety Summit held in the UK on 1–2 November 2023. The Order was intended to establish new standards for AI safety and security, protect privacy, advance equity and civil rights, support consumers and workers, and promote innovation and competition. Developers of the most powerful AI systems would be required to share safety test results and other critical information with the US Government. Standards, tools, and test conditions would be produced to ensure that AI systems are safe, secure, and trustworthy. Protection would be provided against the risks of using AI to engineer dangerous biological materials and against AI enabled fraud and deception, through standards and best practices to detect AI generated content and authenticate official content. An advanced cybersecurity programme would be constructed to develop AI tools to identify, locate, and correct vulnerabilities in critical software. The National Security Council (NSC) and White House Chief of Staff would develop a National Security Memorandum to direct further action on AI and security. This established a substantial programme of key initiatives which will be developed further over time.

G. UK Programme

British government departments and regulatory agencies have adopted a number of initiatives regarding AI. Early guidance on AI Ethics and Safety was produced in June 2019. A National AI Strategy was produced in September 2021. A Pro-innovation Approach to AI Regulation was published in March 2023, building on the earlier AI Action Plan. An AI Standards Hub was set up by the Department for Digital, Culture, Media & Sport (DCMS) with the Alan Turing Institute and with the support of the British Standards Institution (BSI) and the National Physical Laboratory (NPL).

The Financial Conduct Authority (FCA) has considered how regulation can be supported by digital innovation and AI, with the creation of a new Digital Sandbox in 2023 following a pilot scheme in 2020. The Intellectual Property Office (IPO) issued a paper on AI and intellectual property, including copyright and patents, and has proposed creating a copyright exception for AI purposes. The Information Commissioner’s Office (ICO) has issued separate guidance on AI and data protection. The Government and the UK’s AI ecosystem agreed a £1 billion AI Sector Deal to support the UK’s global position in AI, with over £2.5 billion invested in AI since 2014. A further £250 million would be invested through UK Research and Innovation (UKRI) in AI, quantum technologies, and engineering biology. The UK is frequently ranked third globally, after the US and China, in terms of AI investment, innovation, and implementation.

AI and Data is included within the “Grand Challenges” identified by the Department for Business, Energy & Industrial Strategy (DBEIS), together with the Ageing Society, Clean Growth, and the Future of Mobility. These formed part of the “Industrial Strategy” produced by the DBEIS, which was withdrawn in March 2021 and built into the government’s “Plan for Growth” and related strategies. This set out the government’s initiatives to support economic growth through significant investment in infrastructure, skills, and innovation, the promotion of growth in every part of the UK, and the transition to net zero, as well as to support the Government’s vision for a “Global Britain.” AI was also included within the government’s “Science & Technology Framework” as one of the critical technologies, together with engineering biology, future telecommunications, semiconductors, and quantum technologies.

Over 1.3 million UK businesses are expected to use AI by 2040, with AI spending expected to reach £200 billion. Some 3,170 UK AI companies had generated £10.6 billion in AI related revenues by 2023, with 50,040 people employed in AI roles, £18.8 billion in investment, and £3.7 billion in Gross Value Added (GVA).

1. AI Ethics and Safety

The Government Central Digital and Data Office (CDDO) and the Office for Artificial Intelligence (OAI) released guidance on AI Ethics and Safety in June 2019. This was developed with the Alan Turing Institute's public policy programme and supported the government's "Data Ethics Framework". The Turing public policy programme had been set up in May 2018 to develop research, tools and techniques to assist innovation with data-intensive technologies and to improve the quality of people's lives. The AI Guidance was a summary of the Turing Institute's separate "AI Ethics and Safety" framework. The Turing AI Ethics and Safety principles are intended to act as end-to-end guidance in the design and implementation of algorithmic systems in the public sector. AI ethics is referred to as "a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technology." Potential harms are identified in terms of bias and discrimination, denial of individual autonomy, recourse and rights, non-transparent, unexplainable or unjustifiable outcomes, privacy invasions, isolation and disintegration of social connection, and unreliable, unsafe or poor-quality outcomes. Three building blocks are identified in terms of "Support, Underwrite and Motivate" (SUM) Values, "Fairness, Accountability, Sustainability and Transparency" (FAST) Track Principles, and the promotion of a "Process-Based Governance" (PBG) framework to support the goals of ethical permissibility, fairness, trustworthiness and justifiability, with this being implemented through a "Reflect, Act and Justify" (RAJ) approach.
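How such a Process-Based Governance framework might be operationalised can be pictured with a minimal sketch. The following Python fragment is illustrative only: the stage names, field names and evidence entries are hypothetical and are not drawn from the Turing guidance itself. It records justifications against the FAST Track Principles at each project stage and flags gaps, loosely mirroring the "Reflect, Act and Justify" approach.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Process-Based Governance (PBG) log: each project stage
# records evidence against the FAST Track Principles (Fairness, Accountability,
# Sustainability, Transparency) so that decisions can be reflected on, acted
# upon and justified.

FAST_PRINCIPLES = ("fairness", "accountability", "sustainability", "transparency")

@dataclass
class StageRecord:
    stage: str                                    # e.g. "design", "training", "deployment"
    evidence: dict = field(default_factory=dict)  # principle -> recorded justification

    def missing_principles(self):
        """Return the FAST principles with no recorded justification for this stage."""
        return [p for p in FAST_PRINCIPLES if not self.evidence.get(p)]

def audit(log):
    """Print any stages whose FAST evidence is incomplete."""
    for record in log:
        gaps = record.missing_principles()
        if gaps:
            print(f"Stage '{record.stage}' lacks justification for: {', '.join(gaps)}")

# Illustrative usage with invented stage names and evidence.
log = [
    StageRecord("design", {"fairness": "bias review of candidate features",
                           "transparency": "model card drafted"}),
    StageRecord("deployment", {"accountability": "named senior owner assigned"}),
]
audit(log)
```

In practice any such log would sit alongside, rather than substitute for, the fuller documentation and impact assessments that the guidance contemplates.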

2. National AI Strategy

The government’s stated objective was to develop a “10 Year Vision” to make Britain a “global superpower in AI.” AI was described as being the fastest growing “deep” technology in the world, based on significant scientific advances and engineering innovations and requiring longer term development periods and capital investment before commercial application. The AI Strategy is based on three assumptions and three objectives. A summary of key actions was provided over a short term (three months), medium term (six months) and long term (over twelve months) period with a separate summary Vision and set of Impacts, Outcomes and Activities. AI was defined generally in terms of machine tasks with separate legislative reference under the National Security and Investment Act. An AI Council had been established in 2019 to provide expert advice to the Government and high-level leadership. The 10 Year Plan would operate with the government’s Plan for Growth and Innovation Strategy, Integrated Review, National Data Strategy, Plan for Digital Regulation, and forthcoming National Cyber Strategy, Digital Strategy (with the DCMS’s “Ten Tech Priorities”), New Defence AI Centre, National Security Technology Innovation Exchange (NSTIx) and Market Resilience Strategies.

3. Pro-innovation Approach to AI Regulation

DSIT and OAI published a Pro-innovation Approach to AI Regulation in March 2023 to support the Government’s goal of becoming a science and technology superpower by 2030. An initial AI Regulation Policy Paper was produced in July 2022. This set out the basis for the adoption of a pro-innovation framework based on four cross-sectoral principles, with regulation being context-specific, pro-innovation and risk-based, coherent, and proportionate and adaptable. Challenges were summarised in terms of a lack of regulatory clarity, overlaps, inconsistency and approach gaps. AI is specifically defined in terms of two key characteristics of adaptivity (which makes it difficult to explain the intent or logic of AI system outcomes) and autonomy (with the difficulty in assigning responsibility for outcomes). A separate AI Action Plan was published in July 2022 which provides an overview of all of the more specific initiatives undertaken under the National AI Strategy.

The March 2023 policy paper would adopt “a common-sense, outcomes-oriented approach” focusing on the delivery of “the priorities of people across the UK” through better “public services, higher quality jobs and opportunities to learn the skills that will power” the future. Regulation has had a key role in establishing an environment within which AI could flourish, with the UK now attempting to lead the international conversation on AI governance and the value of adopting a pragmatic and proportionate regulatory approach. AI risks were noted. AI was referred to as a general-purpose technology that can fall within different regulatory remits, with the need for a cross-cutting, principles-based AI regulatory framework to promote confidence and innovation. This was supported by the review of pro-innovation regulation for digital technologies by Sir Patrick Vallance, the Government Chief Scientific Adviser (CSA).

The government would adopt a “deliberately agile and iterative approach” with a “pragmatic and proportionate approach” that would “learn from experience and continuously adapt to develop the best possible regulatory regime.” Five key principles would support the framework:

  • (a) safety, security and robustness;
  • (b) appropriate transparency and explainability;
  • (c) fairness;
  • (d) accountability and governance; and
  • (e) contestability and redress.

The principles would be issued on a non-statutory rather than a legislative basis and be implemented through regulatory domain-specific expertise. A statutory duty for authorities to have due regard to the principles would be introduced subsequently, following an initial implementation period. A number of central support functions would be managed by the government through the establishment of a coordination layer as provided for under the July 2022 AI Regulation policy paper. No new AI regulatory authority would be created directly, at least at this stage.

This non-legislative framework would be supported by other initiatives including assurance techniques, voluntary guidance, and technical standards developed in collaboration with such partners as the UK AI Standards Hub. Interoperable methods to incentivise responsible AI design, development and application would be promoted with international partners, with support being provided to UK businesses to capitalise on global market developments and to protect UK citizens from cross-border harm.

The central characteristics of the new regime were that it would be pro-innovation, proportionate, trustworthy, adaptable, clear, and collaborative, with the framework based on four key elements of definition, context specific approach, cross-sector principles, and central functions. The Strategy would be based on four elements of: (a) Objectives (Drive growth and prosperity, increase public trust and promote the UK as a global AI leader); (b) Framework Characteristics (Pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative); (c) Framework Design (Cross-cutting principles implemented by existing regulators with centralised support and coordination); and (d) Implementation (Proportionate and adaptable and informed by monitoring and evaluation). Implementation would adopt a principles-based approach with five cross-sectoral principles.

4. AI Standards Hub

The Alan Turing Institute was appointed to develop an AI Standards Hub in association with the BSI and NPL. The purpose was to advance trustworthy and responsible AI through the use of standards as governance tools and innovation mechanisms. The Hub focuses on knowledge sharing, community and capacity building, and strategic research to support debate, inform and strengthen AI governance, increase multi-stakeholder involvement in AI standards development, and facilitate the assessment and use of published standards produced by Standards Development Organisations (SDOs). The Hub would adopt a global perspective, emphasise stakeholder inclusiveness, and promote interdisciplinary cover. The work of the Hub is based on four pillars: an Observatory (through the use of interactive libraries and a searchable AI Standards Database), Community (and collaboration), Knowledge (and training), and Research and Analysis. The Hub would generally appear to focus on technical standards rather than wider codes and ethical standards.

5. Bletchley Summit Declaration on AI Safety

The UK Government hosted an international Summit on AI Safety on 1 and 2 November 2023, which was attended by senior political and technology representatives from twenty-eight countries including the US and China. Bletchley Park had been selected as this was where code breakers, including Alan Turing, had broken the German Enigma cipher during World War II and where the first electronic computer was used. Representatives attended from Google, Google DeepMind, Microsoft, OpenAI, IBM, Meta, Palantir, Inflection AI, Stability AI, Sony, Alibaba, Tencent and x.AI, as well as the EU and the UN. Elon Musk attended and met UK Prime Minister Rishi Sunak individually on the final evening.

King Charles III stated that the international community had to address the risks posed by AI with the same sense of urgency and unity as had been applied to climate change. Prime Minister Sunak had earlier warned that “humanity could lose control of AI completely” if the technology was not given proper oversight, even if it did create significant new opportunities, although the UK would not try to regulate AI formally immediately. The Prime Minister explained at the Summit that the purpose was to promote “an open and inclusive conversation” to seek a shared understanding of all the relevant risks that AI posed. The Prime Minister stated that “safely harnessing the technology could eclipse anything we have ever known” and hoped that “history proves that today we began to seize that prize” which “will have written a new chapter worthy of its place in the story of Bletchley Park.” He also wished to bequeath “an extraordinary legacy of hope and opportunity for our children and the generations to come.”

A formal communique was issued at the Summit and signed by all attendees. This consisted of four resolutions and six affirmations with regard to AI safety. AI should be designed, developed, deployed and used in a manner that is safe and that is “human centric, trustworthy and responsible”. The parties affirmed the need for the safe development of AI and for the transformative opportunities of AI to be used for good and in an inclusive manner globally. The necessity and urgency of dealing with relevant risks was accepted, especially with regard to “frontier” and foundation models. The parties resolved to work together in an inclusive manner through existing international fora and other relevant initiatives. Safety had to be considered across the AI lifecycle, although there would be a focus on more powerful and potentially harmful systems, with appropriate safety testing being established through evaluations and other measures. Scientific understanding and risk-based policies would be shared, with an internationally inclusive network of scientific research being established in addition to existing multilateral, plurilateral and bilateral collaboration. The parties resolved to sustain an inclusive global dialogue with further meetings to be scheduled for 2024. The Summit was closed with a separate event at Lancaster House in London during which the Prime Minister interviewed Elon Musk on the impact of AI, with the findings and commitments of the Summit being endorsed.

The event was of significance in bringing senior political and technology representatives together and in agreeing the adoption of a formal policy on government-led safety evaluation and testing to prevent unexpected and unnecessary harm and injury. Securing attendance and agreement between the US, EU and China was of specific significance. This was confirmed with the “Bletchley Declaration” communique agreed at the Summit. The objective was to begin a process of continuing testing and collaboration. It was separately confirmed that an AI Safety Institute would be established within the UK, with a parallel institute also being set up in the US and potentially elsewhere. This was also intended to represent the beginning of a process of continued contact and communication, with the exchange of valuable research and assessment results. While the final declaration may have lacked detailed substantive content, it confirmed the establishment of a continuous testing and review process with the full endorsement and cooperation of the largest and most significant and potentially dangerous AI companies in the world.

6. Parliament

The House of Commons’ Science, Innovation and Technology Committee produced a report on AI Governance on 31 August 2023, which highlighted many of the risks referred to above. The House of Lords maintains an Artificial Intelligence Committee in the UK Parliament with an AI in Weapon Systems Committee being set up to consider the implications of AI on the conduct of warfare.

H. AI Regulation Comment

The most substantial set of regulatory measures has been adopted within the EU with its AI Act. Firms are required to assign their activities across one of four risk levels, with certain activities being prohibited outright, including cognitive behavioural manipulation, social scoring, and real-time remote biometric identification and facial recognition. Appropriate risk assessment and cost-benefit analyses have to be conducted, with transparency and accountability obligations imposed. The conduct of effective risk assessments may become an important part of standard business practice and compliance. Criticisms are that the regulatory system established is overly onerous and will lead to market and research disadvantage. Other studies indicate that regulatory compliance may create competitive advantages.
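The tiered structure described above can be pictured with a small sketch. The following Python fragment is purely illustrative: the use-case labels, tier assignments and compliance actions are invented simplifications and do not reproduce the Act’s actual legal definitions, annexes or tests.

```python
# Simplified, hypothetical sketch of a four-tier AI risk classification of the kind
# described above. Tier assignments are illustrative only, not the Act's legal tests.

RISK_TIERS = {
    "social_scoring": "unacceptable",             # prohibited outright
    "realtime_remote_biometric_id": "unacceptable",
    "credit_scoring": "high",                     # high-risk: assessment and oversight needed
    "recruitment_screening": "high",
    "chatbot": "limited",                         # transparency obligations
    "spam_filter": "minimal",
}

COMPLIANCE_ACTIONS = {
    "unacceptable": "prohibited: do not deploy",
    "high": "risk assessment, documentation, transparency and human oversight required",
    "limited": "disclose AI use to end users",
    "minimal": "no additional obligations",
    "unclassified": "assess against the applicable criteria before deployment",
}

def classify(use_case: str) -> str:
    """Return the (illustrative) risk tier for a given AI use case."""
    return RISK_TIERS.get(use_case, "unclassified")

def compliance_summary(use_case: str) -> str:
    tier = classify(use_case)
    return f"{use_case}: {tier} risk - {COMPLIANCE_ACTIONS[tier]}"

for case in ("social_scoring", "credit_scoring", "chatbot"):
    print(compliance_summary(case))
```

Even in this toy form, the sketch shows why classification is the pivotal compliance step: every downstream obligation turns on the tier assigned to the activity.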

The Chinese provisions generally support the promotion of innovation with the safe and reliable development of relevant software, tools, and data sources. These are subject to core socialist values, with providers being responsible for maintaining the legitimacy of data and compliance with the relevant intellectual property and personal data obligations. Breach is subject to effective criminal sanction. The CAC confirmed separately that technology companies had to update their technology within three months where inappropriate content had been generated. The measures attempt to balance the Chinese government’s concerns with generative AI applications and its desire to promote development more generally. China had attracted $17 billion in AI investment in 2021, which represented around one-fifth of the global total, with AI expected to produce up to $600 billion in economic value annually. The measures represent a more interventionist approach, but still one drafted in relatively general terms with more open objectives and targets.

The US approach has generally been more permissive and market tolerant, although specific federal agencies have acted within the scope of their authority, while other important state initiatives have been adopted. The Biden administration has confirmed its intent to act over time. The current regulatory framework is nevertheless based on voluntary guidelines, principally the NIST AI Risk Management Framework (26 January 2023), which was initially proposed on 29 July 2021. The Federal Trade Commission (FTC) has taken action in relation to deceptive and unfair practices, including under the FTC Act and the Fair Credit Reporting Act and Equal Credit Opportunity Act. States have also begun adopting laws addressing AI ecosystems and development, with more specific laws on bias and unfair discrimination being adopted in New York and California.

The different types of measures adopted or proposed generally reflect separate sets of underlying societal values or mores and national or political priorities. The E.U. is essentially protective, with China directive and the U.S. permissive at this stage. The E.U. and China are both more interventionist, focusing on direction, while the U.S. is more market-based, with an emphasis on innovation and competition. The U.K. policy is more pragmatic and reactive, which creates a more flexible but less substantial oversight framework going forward. As with all new forms of regulation, the relative success of the particular model adopted will depend upon the detailed implementation and operational compliance measures and mechanisms adopted.

X. Artificial Intelligence and Financial Markets

Banks and other financial institutions have used machine intelligence and AI in addition to new forms of FinTech for some time. This is referred to as creating a new field of “Artificial Financial Intelligence” (AFI). New technology and FinTech have been increasingly used in alternative financing platforms, retail trading and investment platforms, institutional trading platforms and with distributed ledger technology. FinTech can be mapped across the principal financial services sectors including payments, insurance, planning, lending and crowdfunding, blockchain, trading and investments, data and analytics and securities. More specific functions include sentiment indicators, social trading, trading signals and AML (CFT) and fraud detection. AI has been considered to constitute an integral part of the financial industry for over a decade, although it is expected that there will be even more substantial innovation through the use of generative AI such as ChatGPT. The financial industry has been automated through the use of computers for over fifty years with the introduction of automated teller machines (ATMs). AI has been used since the late 2000s, especially through chatbots, with fraud detection systems adopted since 2017 and automated bond sales by 2019. Further significant personalisation of financial services is expected through the use of generative AI.

The implications of new technology, including AI and Robotics (AIR), have been considered by a number of bodies over time. These include the FSB, the Basel Committee on Banking Supervision, the International Organisation of Securities Commissions (IOSCO) and the International Association of Insurance Supervisors, as well as the Organisation for Economic Co-operation and Development (OECD). Separate papers have been produced by the European Commission, the United Nations, the OECD and the G20 on the use and application of new technology more generally.

A. Financial Stability Institute (FSI) and Financial Stability Board (FSB)

The Financial Stability Institute (FSI) published a paper on AI and regulatory expectations in August 2021. The FSB had earlier examined the development of AI and machine learning in relation to four possible test cases: customer-focused front office applications, back office operations, trading and portfolio management, and regulatory compliance (RegTech) and supervision (SupTech).

1. Front Office

Artificial Intelligence is often used for credit quality and scoring, selling and pricing insurance policies and for client contact virtual assistance (“chatbot”) purposes. Credit scoring can be carried out using algorithms applied to transaction and payment history data as well as new forms of additional, unstructured or semi-structured data including social media sources, mobile telephone use and text message activity which build in qualitative factors including consumption behaviour and willingness to pay. Data leverage makes credit assessments more efficient with greater, faster and cheaper segmentation of borrower quality.
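A minimal sketch can make these mechanics concrete. The example below uses synthetic data, invented feature names and assumes scikit-learn is available; it fits a simple logistic regression to a handful of transaction-history features and scores a new applicant. It is illustrative only and omits the unstructured data sources, borrower segmentation and governance controls described in the text.

```python
# Minimal, illustrative credit-scoring sketch: a logistic regression over a few
# synthetic transaction-history features. Real systems would use far richer
# structured and unstructured data and would require fairness and explainability
# controls of the kind discussed elsewhere in this paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per applicant (all invented): [average monthly balance (£k),
# missed payments in the last 12 months, income volatility score 0-1]
X = np.array([
    [3.2, 0, 0.1],
    [0.4, 3, 0.7],
    [1.8, 1, 0.3],
    [0.2, 5, 0.9],
    [2.5, 0, 0.2],
    [0.9, 2, 0.6],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid previous credit, 0 = defaulted

model = LogisticRegression().fit(X, y)

applicant = np.array([[1.5, 1, 0.4]])
probability_of_repayment = model.predict_proba(applicant)[0, 1]
print(f"Estimated probability of repayment: {probability_of_repayment:.2f}")
```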

2. Back Office

Machine learning programmes can be used to increase capital optimisation within banks, improve model risk management and make market impact analysis more efficient. Machine intelligence can increase the efficiency, accuracy and speed of capital optimisation in banking and in derivatives areas, including for margin value adjustment (MVA) calculation purposes. AI and machine learning may assist in the conduct of back testing and model validation and in stress testing. This can also improve market impact analysis with “trading robots” reacting to market changes, assessing trade impacts, identifying behavioural patterns and adjusting timing decisions.

3. Trading and Portfolio Management

AI and machine learning can improve research and development as well as sell side trade execution and buy side portfolio management. Increasingly large volumes of data are produced requiring analysis for risk modelling and client service purposes as well as separate portfolio management. Deep learning has also been increasingly used by systematic (quant) funds.

4. RegTech and SupTech

AI and machine learning can be used to improve firm compliance (RegTech) and supervisory monitoring (SupTech). Investment in RegTech was expected to reach $6.45 billion by 2020. RegTech can examine unstructured data using machine learning and Natural Language Processing (NLP). This could assist compliance with specific regulatory requirements including, for example, MiFID II and the AIFMD, with authorities separately working on making specific regulatory obligations machine readable. Supervision can be improved through the use of machine learning in terms of systemic risk identification and in assessing risk propagation channels. This could be used to detect, measure, predict and anticipate market volatility, liquidity risk, financial stress, housing prices, and unemployment factors. Central banks can use AI to assist with monetary policy and other assessments.
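A heavily simplified illustration of this RegTech pattern appears below: unstructured text (here, invented communications) is screened for indicators of potentially reportable conduct. A production system would use trained NLP or machine-learning classifiers rather than the keyword rules shown, which serve only to show the shape of the workflow.

```python
# Illustrative RegTech sketch: screen unstructured communications for phrases that
# might indicate a reportable compliance issue. Real deployments would use trained
# NLP / machine-learning models rather than simple keyword rules.
import re

FLAG_PATTERNS = {
    "possible market abuse": re.compile(r"\b(front[- ]run|insider|guaranteed profit)\b", re.I),
    "possible mis-selling": re.compile(r"\b(no risk|cannot lose|unsuitable product)\b", re.I),
}

def screen(messages):
    """Yield (message, reason) pairs for communications that warrant human review."""
    for msg in messages:
        for reason, pattern in FLAG_PATTERNS.items():
            if pattern.search(msg):
                yield msg, reason
                break

sample = [
    "Client call booked for Tuesday to review the portfolio.",
    "Tell them it is a guaranteed profit if we move before the announcement.",
]
for msg, reason in screen(sample):
    print(f"FLAGGED ({reason}): {msg}")
```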

B. Basel Committee on Banking Supervision

The Basel Committee on Banking Supervision has been reviewing the supervisory implications of the use of AI and machine learning (ML) in the banking area. The Committee held a workshop on this with its Supervision and Implementation Group (SIG) on 3 October 2019. AI was included within its examination of the Digitalisation of Finance within its work programme for 2023–24 with other technological developments.

C. IOSCO

AI has been included within the International Organisation of Securities Commissions (IOSCO) Board’s priority areas of activity. IOSCO published a report and guidance on AI and market intermediaries and asset managers in 2021 with the six major recommendations focusing on senior manager responsibility, adequate testing and monitoring of algorithms, necessary skills, expertise and experience, service provider relationships, disclosure, and maintaining appropriate controls.

D. International Association of Insurance Supervisors (IAIS)

AI and machine learning have been studied by the International Association of Insurance Supervisors (IAIS) with FinTech developments in the international insurance sector. AI is included within the IAIS’s wider work on Digital Innovation.

XI. Artificial and Machine Intelligence and Law

Difficult issues arise with regard to moral and legal responsibility and liability in the AI and robotic fields. A number of complex issues are involved. A substantial part of the discussion with regard to AI and robotic liability, especially in philosophy, is concerned with moral agency and artificial moral agents (AMAs). A clear distinction has nevertheless to be drawn between moral agency and legal agency. An agent is generally understood as an entity with the capacity to act, with agency being the exercise or manifestation of this capacity. In philosophy, moral agency can either be considered in terms of moral responsibility or moral awareness. This can also be examined in terms of the ability to form moral judgements concerning right and wrong and to be held accountable for the decisions taken. Agency in law is concerned with the scope of the capacity of one person (the agent) to act on behalf of another (the principal) in accordance with the terms of the express or implied authority conferred. This is concerned with legal capacity to act, with the agent generally having full independent legal capacity to engage in contractual relations on behalf of the principal.

A series of more specific debates then arise with regard to the ethics of artificial moral agents (AMAs). A large number of papers have been issued although philosophical and legal perspectives are rarely separated. Much of this again reduces itself to issues of legal capacity and legal liability which are based on legal personality. The AMA debate can then be reformulated in terms of a number of sub-fields including Artificial Physical Agents (APAs), Artificial Intelligent Agents (AIAs), Artificial Moral Agents (AMAs), Artificial Legal Agents (ALAs), Artificial Super Agents (ASAs) and Artificial Network Agents (ANAs). This can also be considered with Artificial Legal Personality (ALP), Artificial Legal Capacity (ALC) and Artificial Legal Liability (ALL). Each of these is considered in turn.

A. Artificial Physical Agents (APAs)

Moral agency, and artificial moral agency, discussion generally proceeds on the assumption of the existence of an independent entity or agent that can act on a separate or freewill basis. This is usually considered in terms of a separate person or robot. A robot is defined for the purposes of this paper in terms of mechanical function operating on either a pre-programmed, remote or autonomous basis. This will include remote or drone function. A robot is a form of controlled machine as opposed to a simple tool. Robotics is distinct from AI which is concerned with data or neuroprocessing and decision taking with robotics involving some form of actuator which results in physical action on instruction. The most difficult issues arise with regard to Artificial Autonomous Intelligence (ATI) (or AutoBots) in practice which involves separate or independent processing.

Moral agency and artificial moral agency will often then involve some form of Artificial Physical Agent (APA) which carries out the conduct in question. This may include either a separate physical entity, such as a robot, or a software system that can instruct other physical systems to act. This may be referred to as an Artificial Program Agent (APrA) or possibly a SoftBot. These forms of agent create a physical result either on a direct or indirect basis.

B. Artificial Intelligent Agents (AIAs) and Artificial Autonomous Agents (AAAs)

A further distinction may then be drawn between Artificial Physical Agents (APAs) and Artificial Intelligent Agents (AIAs), which involve some form of artificial processing function. While writing on AI generally treats it in terms of non-biological intelligence more broadly, the most significant issues will arise with regard to Autonomous Intelligence (ATI) as noted. Artificial intelligence involves any form of mechanical or digital data or neural processing or decision taking, although the more significant issues will arise where this is carried out on some form of independent or autonomous basis. Ethical issues may then be considered in terms of Artificial Autonomous Agents (AAAs).

Different degrees of autonomy may be considered. This may include some form of basic operational independence or autonomy, timing autonomy, functional autonomy, declination (refusal) to act, and any other independent processing. The nature of autonomy may be examined in further detail to distinguish different degrees of independence within specific programming or cybernetic systems.

C. Artificial Moral Agents (AMAs)

Separate ethical issues have been identified within the area of artificial moral agency. The core issue that arises is whether some identifiable agent should be considered to constitute an artificial moral agent (AMA) with separate moral responsibility or only treated as a machine. This is again most commonly considered in the field of robotics. A number of arguments may be developed for and against assigning moral responsibility to AMAs. Arguments in favour of this include separate physical identification, anthropomorphic semblance, direct causal contact, assumed moral awareness, assigned moral responsibility, and consequent assumed moral liability.

A series of arguments against moral assignment can also be constructed. These can be summarised in this paper in terms of entity irrelevance, direct and indirect causal control, anthropomorphic or humanoid irrelevance (or anthropomorphic, humanoid or android fallacy), limited literacy or technical awareness, residual lack of legal capacity and consequent personality, lack of effective financial resources to cover loss and lack of wider social recognition and acceptance. Many of the arguments that arise are most commonly based on anthropomorphic identification, equation and assumed equivalence with any robotic systems that begin to assume humanoid appearance, or character traits, being treated as a human. This has to be resisted with the core issue being whether the system should be assigned separate legal capacity which is, in turn, dependent on separate legal personality.

D. Artificial Legal Agents (ALAs)

Legal personality can generally be conferred on any entity, historically by monarchs and now by legislative direction as appropriate. Any such determinations are often based on a large number of non-legal considerations, including from a historical, political, moral, philosophical, metaphysical, and theological perspective. A statutory right to incorporation was conferred for the first time in Great Britain under the Joint Stock Companies Registration and Regulation Act 1844, on application by seven persons, which established a Registrar of Joint Stock Companies. Limited liability was only introduced subsequently under the Limited Liability Act of 1855. A number of theories have been developed to justify or explain legal personality, including fiction, concession, purpose, symbolist and realist theories. States are also recognised as legal persons under Public International Law although the basis for the recognition of international and non-governmental organisations, corporations and individuals under Public International Law may be less clear. Arguments for and against the creation of legal personality can be summarised in this paper in terms of independent identity, autonomy of decision taking, physical agency capacity, moral agency capacity, and legal capacity. None of this is nevertheless conclusive.

Many arguments in favour of the conferral of Artificial Legal Personality (ALP) on AI and machines are again often based on the increased approximation of human characteristics and anthropomorphic identification as noted. This may be summarised, or classified, in terms of the humanoid assumption or the anthropomorphic, humanoid or android fallacy referred to. None of these arguments of themselves justify the conferral of legal personality on computer or AI programmes or systems. Many commentators argue that robots do not qualify for the attribution of legal personality. Other writers maintain that some form of modified or reduced legal personality could be conferred over time. Other epistemological and ontological arguments can also be developed against robot personality.

Legal personality is generally only attributed to humans as natural persons and legal persons in the form of corporations in law either created by royal or parliamentary grant and more recently statutory direction through public and private company law. Legal personality generally then only operates on the basis of a form of continuing control or delegation with humans remaining responsible for all of the decision taking within a corporate entity structure. All important judgments and decisions are taken through the senior management structure with formal systems for the convening of meetings and the keeping of minutes and records. Other actions may be carried out by more junior managers, or employees, on instruction on a delegated basis although again only within a more formal human based decision taking hierarchy. All of this is subject to separate shareholder oversight through annual and general shareholder meetings. Corporate legal personality is then ultimately always based on embedded and aggregate or residual human direction and control.

Parliaments and legislatures could assign legal personality to robots and other AMAs which is ultimately a political and social decision. While anthropomorphic semblance, or appearance, may encourage some commentators to promote the assignment of legal personality, there is no legal or philosophical justification for doing so. Machines, including sophisticated autonomous intelligent and robotic systems (AIRS), are ultimately only machines which should arguably be subject to continuing human control and direction. The autonomous nature of these systems, of itself, does not justify the conferral of legal identity and personality. The absence of separate independent legal identity, responsibility and liability, in turn, requires that such systems are always subject to some form of human control and responsibility and never allowed to operate on a fully autonomous and unmanaged basis. This creates a form of “autonomy limit” or “absolute residual human control” principle or doctrine.

E. Artificial Legal Personality (ALP), Artificial Legal Capacity (ALC) and Artificial Legal Liability (ALL)

A possible form of qualified Artificial Legal Agency (ALA) could still be developed for use in connection with the most advanced forms of Artificial Intelligent Agents (AIAs) and, in particular, full Artificial Autonomous Agents (AAAs). The objective would be to confer a limited form of Artificial Legal Capacity (ALC) on AI or robot systems to allow them to act on behalf of their principal in entering into legal contracts, for example, as part of the Internet of Things (IoT) or a new, in this paper, “Internet of AI” (IoAI) or “Internet of Robots (or Bots)” (IoR (or IOB)). This could be used, for example, for online or physical shopping, charging electric vehicles (EVs) or parking and flight and travel bookings. The AI or robot would acquire a form of delegated legal capacity to allow them to act and contract on behalf of their owner or instructor.

This could, in theory, be based on a qualified form of Artificial Legal Personality (ALP) although this would not be necessary. This would simply operate as a type of qualified legal capacity to contract and hold property, subject to an adjusted form of agency law. The principal would remain directly liable for all conduct, or misconduct, of the agent which would cover all actions within the express and implied authority of the agent or even apply on a strict liability basis subject to a highly restricted set of permitted derogations. This would create a form of full scope responsibility and strict no fault AI or robotic liability. The European Parliament has recommended the issuance of insurance although this was with the conferral of possible legal personality on robots. The creation of legal personality was nevertheless rejected in an open letter prepared by AI and robotics, industry, law, medical and ethics experts.

All of this would in practice be further restricted only to being available where appropriate liability insurance was in place for all loss to ensure proper remedy and compensation as necessary. This again could be applied on a strict liability basis without fault. The consideration of any application for insurance liability could be made based on a form of impact assessment which would determine all possible sources of injury, claim, and loss. Where this was not available, Artificial Legal Agency (ALA), and, where relevant, Artificial Legal Personality (ALP), would not be permitted and the scope of any associated computational or programme authority would have to be limited as necessary.
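The qualified Artificial Legal Capacity described above can be pictured, very roughly, as a mandate enforced in software. The Python sketch below uses entirely hypothetical class and field names: the agent may conclude contracts on its principal’s behalf only within express authority limits and only where liability insurance is recorded, otherwise refusing to act, reflecting the “autonomy limit” idea in a loosely analogous way.

```python
from dataclasses import dataclass

# Hypothetical sketch of a delegated "Artificial Legal Capacity" mandate: the agent
# may only contract within its express authority and only while liability insurance
# is in place; everything outside the mandate is refused and left to the principal.

@dataclass
class Mandate:
    principal: str
    permitted_categories: frozenset
    spending_cap: float            # per-transaction cap (illustrative, in pounds)
    insurance_policy: str | None   # None means no liability cover is recorded

class PurchasingAgent:
    def __init__(self, mandate: Mandate):
        self.mandate = mandate

    def attempt_purchase(self, category: str, amount: float) -> str:
        m = self.mandate
        if m.insurance_policy is None:
            return "refused: no liability insurance recorded for this agent"
        if category not in m.permitted_categories:
            return f"refused: '{category}' is outside the agent's express authority"
        if amount > m.spending_cap:
            return f"refused: {amount:.2f} exceeds the {m.spending_cap:.2f} spending cap"
        return f"contract concluded on behalf of {m.principal} for {category} at {amount:.2f}"

# Illustrative usage with invented details.
agent = PurchasingAgent(Mandate(
    principal="A. Owner",
    permitted_categories=frozenset({"ev_charging", "parking", "groceries"}),
    spending_cap=100.0,
    insurance_policy="POL-0001",  # invented policy reference
))
print(agent.attempt_purchase("ev_charging", 35.0))
print(agent.attempt_purchase("flights", 250.0))
```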

All other cases would be determined on a simple tool, or machinery, basis with the person operating or controlling the device being responsible for any loss on a strict liability basis as noted. This would allow recovery in all cases except where another person was responsible or the victim was contributorily negligent either in full or in part. This extended third party responsibility could apply where designers, programmers or modifiers were responsible as necessary for the fault depending upon the circumstances. In practice, all of this could be set out in contractual documentation and other disclaimers subject to compliance with the legal liability framework created as outlined.

The overall effect of this would be to establish a system of core responsibility for risk and loss with the government, institution or individual setting up or operating the AI program being liable for any associated damage caused. This could also apply where military applications and operations were involved with this being extended again to impose liability for genocide on all parties concerned, including governments, companies or institutions and individuals. Such systems would only be usable in accordance with the relevant rules of engagement which would generally have to comply with all other restrictions and principles of Public International Law including Fundamental Human Rights (FHR) and the Laws of War and any new Laws of AI & Robotics (AIR).

F. Artificial Super Agents (ASAs) and Artificial Network Agents (ANAs)

The development of AI systems is generally explained in terms of the creation of “Narrow” or “Technical AI systems or Levels” (NAILS or TAILS in this paper) which carry out specific functions and wider more general AI, or open AI systems, that can perform a full range of functions, similar to human intelligent operations. More recent innovations in general, or open, AI systems use processing models that approximate to natural neural functions with the flexibility and adaptability that these facilitate. A high degree of autonomous function may then be possible with the construction of such general or open AI models.

The degree of computational power and autonomy will be further increased again with the creation of further forms of “Super Artificial Intelligence Level” (SAIL) systems or “Super Artificial Intelligent Network Technologies” (SAINTs). This is expected to arise principally through forms of embedded recursion which will allow systems to self-correct and self-advance on an independent, internal and exponentially fast and self-directed basis. Machine processing capacity and supremacy will inevitably far exceed that of humans. The residual issue that arises is whether such forms of super AI, or “Super Autonomous Intelligence” (SaI), should ever be assigned legal personality. While this may be reviewed further over time, it can be argued that this would still not justify the conferral of legal personality for all of the reasons explained above.

Additional issues may arise with regard to network systems with the proposed establishment of interconnected Super AI models that bring together the computational power and capacity of a large number of super AI systems. This may also create a form of new alternative AI Internet (“AINet”) which connects super AI systems together under the “Super AI Network Technology” (SAINT) framework referred to. This could massively increase the computational capability of the total system. Reference has also been made to the possible creation of computer systems on a planetary scale which includes creating a “Jupiter Brain” or “Matrioshka Brain”. Equivalent issues arise in establishing General AI, Super AI and Network AI models. It still remains questionable whether the equivalent legal personality that is conferred on natural persons and corporations as aggregate biological persons could be attributed to such systems in any of these cases. Some new form of “Artificial Legal Capacity” (ALC), or qualified ALC or limited ALP, could be considered in future if circumstances change although this may only be necessary for liability and insurance purposes which can be managed without this.

XII. Artificial Intelligence and Machine Technology

Uncertainty remains with regard to the possible limits and effects of machine intelligence and artificial intelligence. Machine intelligence can be used to refer to programmable, or trained, forms of intelligence using computers and algorithms. Difficulties arise in determining the limits of machine intelligence and artificial intelligence, the nature of machine sentience and whether machines can ever achieve consciousness, the possible need for machine ethics or equivalent programmes or protocols, the inevitability of creating self-reinforcing Artificial Super Intelligence (ASI) and the merging of human and machine intelligence. These issues can be considered further in terms of machine intelligence, machine sentience, consciousness, consciousness theories, intelligence and cognition and the forthcoming machine Singularity.

A. AI & Machine Intelligence

Machine intelligence can be used to refer to machine reading and machine learning and to where programmes develop a general cognitive ability and approach Artificial General Intelligence (AGI). Alan Turing proposed an assessment test in 1950 to determine whether a machine could approximate general intelligent behaviour, with an evaluator assessing the difference between a human and a machine expressing ideas through natural language understanding (NLU) or natural language interpretation (NLI). Turing created an “Imitation Game” with the interrogator determining whether a machine can act in a manner indistinguishable from a human rather than think as a human. It was accepted at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 that a machine could simulate intelligence. The French philosopher, René Descartes (1596–1650), had earlier distinguished automata, or self-operating machines, from humans in terms of their inability to construct flexible linguistic responses.

The American researchers Allen Newell and Herbert Simon claimed in 1963 that “symbol manipulation” was the essence of human and machine intelligence. The physical symbol system hypothesis (PSSH) stated that a physical symbol system was the necessary and sufficient means for general intelligent action. A mind may nevertheless process higher level, more complex symbols rather than simple representations of physical matters. The American philosopher Hubert Dreyfus accepted that a physical device could reproduce the behaviour of the nervous system. The American philosopher John Searle acknowledged that a man-made machine could think. Dreyfus also claimed that human intelligence was dependent on unconscious instincts beyond intentional symbolic manipulation.

Machine learning can duplicate human intelligence in terms of results, although that is distinct from replicating its operations. It is arguable that machine learning may never replicate free thought and free will with open choice, instinct, emotional reaction and preference, and the essential unpredictability of human thought and the human condition. For the purposes of this paper, intelligence has been defined as the ability to carry out one or more neural functions, or processes, on a programmed, directed, or autonomous basis. Identity would be a condition of self-awareness. Awareness can be understood as a form of cognition or recognition of the ability to carry out one or more neural or processing functions. Understanding is defined as the appreciation of meaning, sense and intention with attached meaning. Knowledge is defined as understanding, appreciation and awareness, including physical and emotional (chemical) experience and symbolic language recognition.

Artificial Intelligence can then be defined as any separate, or independent, data or neural analysis, processing or decision taking system that operates on a programmed, directed or autonomous basis without human control or direction. This may be mechanical, as with a mechanical computer, or electronic or digital, and vary depending upon the degree of autonomy or independence provided for. Autonomous Intelligence (ATI) would involve a separate, or independent, data or neural analysis, processing or decision taking system that operates on a fully autonomous basis without human control or direction. This is often implied in many uses or references to artificial intelligence. The term Artificial Agent or Autonomous Agent (AAA) can be used to distinguish this from more general AI.

B. AI & Machine Sentience

Machines will be able to achieve multiple layers, or degrees, of machine sentience. Searle distinguished between “weak AI,” in which a physical symbol system can act intelligently, and “strong AI,” with a physical system having a mind and mental states. This is still distinct from consciousness. The “hard problem” of consciousness refers to explaining how organisms experience phenomena, or qualia, such as pain or pleasure. Earlier philosophers considered whether the mind and body were separate (dualism) or everything constituted the same matter (monism or materialism). Computationalism, or the computational theory of the mind, considers the brain and thinking as a form of program computation.

Brain operations are examined in neuroscience and neurobiology. This includes the study of perception and stimulation, mental and neural processes, the different functions carried out by specific anatomical sections of the brain and the development of the human nervous system. This attempts to explain the physical operations of the mind, experience, and understanding, and generally assumes a form of physicalism or materialism.

It is arguable that while machines may arrive at a high level of machine intelligence and sentience, including with some degree of pre-programmed self-awareness, they may never be able to achieve the equivalent of biological consciousness in a human form although the validity of this may simply be dependent upon how the various terms are defined.

C. Biological Consciousness

Further difficulties then arise with regard to the meaning of consciousness more generally and with regard to machine intelligence and sentience more specifically. Conscious and consciousness are complex, combination, composite and polymorphic, or polysemous, terms. They are also contestable to the extent that they may involve different value senses. The etymology of the term conscious is concerned with common knowledge or knowing together. Aristotle (384–322 BC) considered consciousness in terms of perceptual awareness. Thomas Hobbes referred in Leviathan to knowledge of the same facts. John Locke (1632–1704) discussed consciousness in terms of perception. The English essayist, Samuel Johnson (1709–1784), considered this in terms of internal feeling.

Consciousness is often examined in terms of being a mental as opposed to a physical process. The distinction is discussed in terms of the mental physical (or mind body) problem as noted previously. The French philosopher, Descartes, discussed this in terms of dualism (referred to as Cartesian dualism) with distinct realms of thought (res cogitans) and the physical or material realm (res extensa). Dualism includes substance dualism, which examines the mind or thought as a distinct substance, and property dualism, which examines substance in terms of physical and mental properties. Dualism is contrasted with monism which includes physicalism (matter includes the mind), idealism (with reality being a mental construct), and neutral monism (with reality being neither mental nor physical but a combination). How sensory inputs give rise to individual experiences, or phenomena (qualia), is, as noted, referred to as the hard problem of consciousness.

Whether a machine can replicate human intelligence can be considered in terms of machine or artifact intelligence or consciousness. The English polymath Charles Babbage (1791–1871) designed a universal Analytical Engine in 1837 following his automatic mechanical calculator, the Difference Engine, produced in the 1820s. Machines were later described as being computationally complete or “Turing complete” following Turing’s mathematical model of a universal computation machine (the Turing machine). Machines are Turing complete when they can carry out any computational function and Turing equivalent where one machine can simulate the other. The Turing test (or Imitation Game) is used to determine whether machines can think by assessing their ability to convince an independent observer that machine responses can replicate the quality of human responses. Alan Turing responded to all of the more traditional objections raised to machines developing full artificial intelligence.

D. Consciousness Theories

The nature of consciousness has been examined in terms of a number of separate theories. These can be classified in different ways, which reflects the complex, combination, composite, and polymorphic or polysemous nature of the terms concerned. These can essentially be divided into dualist, or combination, and material, or physicalist, schools of thought which reflect the underlying mind body problem. A number of different sets of ideas can be identified although many of these have overlapping features.

Dualism reflects Descartes’ distinction between res cogitans and res extensa with two separate realms of physical and mental existence. Specific elements of this include, as noted, substance dualism (Cartesian dualism), property dualism (with dual aspect physical and phenomenal instantiation attribution), including fundamental property dualism and emergent property dualism, neutral monist property dualism (with a larger single unitary reality), and panpsychism (with everything capable of being conscious).

Material and physicalist theories are essentially monist (non-dualist) and attempt to account for consciousness without reference to non-physical elements. These include eliminative materialism and denial theories, and identity (including type-type identity) theories that equate experiences with physical brain states. Other materialist schools explain consciousness in terms of physical relations, which include functional or reductive physical theories. Reference is also made to emergentism, with new properties in a system emerging out of the relationship or interaction between other properties.

More specific explanations of consciousness can be classified in terms of neural (including neural correlate or frequency) theories, representational (mental association) theories (including First-order Representational (FOR), Higher Order Representational (HOR), and Hybrid States), cognitive (including Multiple Drafts Model (MDM) and Global Workspace Theory (GWT)) theories, and quantum theories. These are all reductionist to the extent that they explain consciousness in terms of other more specific properties or components. These formulations also include Information Integration Theory (IIT), Reflexive (self-awareness) higher order theories, and other cognitive theory (such as Attended Intermediate Representation (AIR)), in addition to narrative interpretive theories (the Multiple Drafts Model (MDM) above) and neural, representational, and quantum theories as referred to.

This remains a complex area of debate. Many elements or parts of these different approaches may be necessary to construct an overall composite final solution. While the detailed philosophical examination of the issues concerned remains of interest, some of the most interesting insights may arise in the areas of empirical neuroscience and neural biological study. This has produced invaluable new perspectives on such complex issues as alertness, awareness, memory, attention (and decision-taking), cognition, and language. Nevertheless, it is unnecessary to produce a final determinative position on all these issues in order to consider the relationship between biological consciousness and artificial intelligence in further detail.

E. Intelligence or Cognition Wall

Intelligence is a complex phenomenon which has been examined from a number of interdisciplinary perspectives, including biology, chemistry, neuroscience, psychology, and psychiatry. Many aspects of brain function have still not been fully determined and resolved, including the nature of intelligence and consciousness. It is nevertheless possible to identify a number of core functions, or operations, within these processes. For the purposes of this paper, an “Intelligence or Cognition Wall” can be constructed to identify the principal core elements involved and to allow biological and machine functions to be contrasted. It is then possible with this to compare biological consciousness and machine sentience in more detail to confirm the specific areas in which machine performance may be more effective than humans but also to assess the inherent limitations within a machine intelligent architecture.

Intelligence can then be considered in terms of a number of separate functions which may be carried out on an artificial and mechanical or biological basis. These would include the following:

(1) The carrying out of essential motor operations within the body, such as managing heart rate, blood circulation, breathing, digestion and temperature control, with the brain historically emerging principally as a resource management device through evolutionary pressure.

(2) The brain has to receive and process all of the incoming sensory inputs received, including visual (sight), auditory (hearing), tactile (touch), olfactory (smell), gustatory (taste), vestibular (movement), and proprioceptive (body awareness).

(3) Human brains can carry out multiple processing, or calculation, functions at any time without the need for reprogramming (being Turing complete) which can be summarised in terms of logic or reasoning and arithmetic, algorithmic or probabilistic calculations.

(4) Environmental cognition or “Environmental Recognition” (ER) involves an entity identifying its immediate and wider surroundings and ordering and understanding those surroundings or environment.

(5) Identity cognition involves an entity separating itself from its environment and developing a sense of personal awareness and function distinct from its surroundings which is in contrast to less developed organisms or systems which do not separate themselves from their surroundings and environment.

(6) Social cognition involves the identification of other entities of the same species, or genus, within the environment which allows the formation of relations with common entities which specifically occurs naturally with humans in terms of family, friends, and community relationship construction.

(7) Communication cognition can be understood principally to arise through social interaction and engagement with other members of the species or genus, in particular, through the attachment of definitions, meaning and understanding to specific ideas, items and symbols which creates the formation of a common vocabulary and language.

(8) Emotional cognition arises through complex chemical forces management within biological systems and specifically human neocortical and limbic functions, with human beings being fundamentally electrochemical (rather than only electro binary) organisms and to a significant extent driven by chemical compositional reactants, with decisions often being taken based on emotional and chemical reactions rather than on a strictly rational or reasoned basis.

(9) Complex code cognition and conflict reconciliation, and original code construction, occurs with humans constantly having to take complex decisions with regard to reconciling multiple conflicting objectives and relevant value systems or standards at any point in time which may only be possible in machines through targeted pre-programming and training.

(10) Causation cognition provides humans with an inherent natural sense of enquiry and the need to investigate and understand with this only being replicated in machine systems through prior programming which will then often only operate on a random, selective and correspondingly limited basis.

(11) Emotional attribution cognition (theory of mind) and alternative condition cognition (imagination) allow humans to understand and predict other peoples’ behaviour, principally through a form of extended sympathy or empathetic transfer, as well as to develop models and alternative explanations, outcomes and scenarios through highly original, creative, imaginative individual and collective thinking.

(12) Control cognition (or free will) consists of the ability to recognise or experience all of these other cognitive processes and move freely between these other functions and the consequent assumption of control and responsibility that this allows over a person’s life and existence.

The most important of these cognition functions may be the last, control cognition, with its ability to manage and switch between all of the other functions. This in aggregate allows humans to understand their environment and themselves, and to take control over their immediate conditions and lives. This effectively allows people to discharge the implied immediate primary functions of living organisms, which are survival and reproduction, as well as to select, specify or determine a new higher purpose, or purposes, for their own existence and lives and to attempt to follow and secure the objectives set.

Consciousness can then be understood to constitute the condition of internal and external awareness, combined with the ability to control all of these other neural or cognition functions and to switch between them on an unrestricted basis. It is a condition of aggregate direction or control over the other separate neural or processing functions which can be associated with the idea of free will and control over life and life’s purpose and application. This would correspond with cognition level twelve and the aggregate condition of being able to select and move between the other cognition functions on an open, free and unrestricted basis, together with the additional condition of the awareness associated with this.

The overall effect of this is that consciousness is not a single condition but a complex, or combination, of conditions. Consciousness can be considered to constitute a contestable term with multiple meanings as noted. Consciousness is complex cognition and consists of a multi-composite aggregate form of condition, including each of the other cognition functions referred to above. This would have at its core motor and sensory input management, supported by environmental, identity and communication meaning or understanding. Understanding and knowledge are further developed through causation and enquiry, with complex conflict resolution generating common continuous values or codes of conduct and ensuring order and stability within communities. Social awareness arises through sympathy and empathy. All of this allows people to control their existence both in terms of managing the immediate necessities of life as well as planning and securing longer term goals and aspirations.

Machines can develop complex programming capability which will allow them massively to outperform humans in terms of processing capacity (under cognition level three above). This can be summarised in terms of data volume (capacity), validity (quality of retrieval and accessibility), velocity (speed of processing), veracity (accuracy of processing) and verification (finality). Machines could be programmed to manage motor operations (as with medical machinery) and process mechanical sensory input as well as resolve pre-programmed conflicts using pre-programmed value systems. They can be programmed to have a limited sense of identity and to be aware of their primary and any secondary programmed instructions or functions. It is nevertheless questionable whether this sense of identity can arise naturally and could evolve further. The idea of the creation of spontaneous self-awareness from mechanical processes and function is highly questionable. Machines may not be capable of original identity creation or open complex conflict resolution, nor of the development of associated machine-specific (individual) and agreed collective moral values. As they are only mechanical devices, they cannot process chemical inputs and may therefore not be able to develop emotional sensitivity and empathy without programming. Substitute forms of mechanical sensory input may still be creatable over time (including through genetic engineering or animaloid tools) although this may never be comparable with natural human processes. It is also unclear whether machines could be programmed to have an open investigatory capability which would identify physical and metaphysical points of enquiry for sequential investigation and solution. It is specifically unclear whether they could, for example, develop non pre-programmed philosophical enquiry rather than simply respond to questions and prompts. It is unclear more generally whether they could ever develop their own sense of purpose and objective on an open, discretionary, non pre-programmed and non-human directed basis.

The effect of this is that while machine sentience may increasingly approximate human consciousness, this may always remain distinct and limited due to the natural open and autonomous nature of human biological processes and systems. Whether this is purely biological and neural or is dependent on other non-physical components remains unclear. This additional element may be considered to correspond with the metaphysical idea of the human soul which exists separately from the material body and which may be described as immortal. Greek philosophers, such as Socrates (470–399 BC) and Plato (428–348 BC), considered this in terms of the psyche (breath). Aristotle (384–322 BC) referred to this as the “first actuality” and considered that the intellect (nous) was immortal. Aristotle was followed by the Dominican priest, Thomas Aquinas (1225–74 AD). This has been re-examined subsequently, for example, in the British philosopher Gilbert Ryle’s “Ghost in the Machine”, a phrase which represents and corresponds with Cartesian dualism, which Ryle rejected. This is examined in terms of the “hard problem” in philosophy. This residual condition can be referred to as the “residuate” for the purposes of this paper. With regard to AI, this equates with the idea of an “operator” inside a machine managing the machine as noted. The biological and neuroscientific basis for the condition of awareness and control will continue to be subject to research and study. Whatever the scientific basis for this condition and the nature of consciousness, it is arguable that machines may only achieve an exceptionally wide variety of degrees of machine sentience and possibly never attain the equivalent of full biological consciousness.

F. Machine Singularity

The Singularity can be used more generally to refer to the point in time at which machine intelligence will outperform human intelligence or, more specifically, to the merger of human and machine intelligence which creates an exponential further growth in intelligence. The term Singularity was originally used in this sense by the Hungarian-American mathematician John von Neumann (1903–1957). The British mathematician Irving John Good (1916–2009), who worked with Alan Turing at Bletchley Park, predicted that an intelligence explosion arising with Artificial General Intelligence (AGI) could lead to a singularity in the form of Artificial Super Intelligence (ASI). The American mathematician Vernor Steffen Vinge projected that technology would create superhuman intelligence, following which the human era would end after a Singularity. The American computer scientist Raymond Kurzweil has commented on the approaching singularity in advances in artificial intelligence. Other singularities may also arise in addition to the technological singularity referred to by von Neumann, Good, Vinge and Kurzweil. These may include, for example, world population growth and possibly climatic damage.

A general division is drawn for the purposes of this paper between a “Lower Singularity” and an “Upper Singularity” and between non-synthetic and synthetic human machine intelligence. The Lower Singularity refers to the ability of machines to outperform humans on any functional capacity or capability scale. This includes a Single or Narrow Lower Singularity, with regard to a limited or specific range of functions, and a Wide or General Lower Singularity, which covers any identifiable function. The Upper Singularity then refers to the ability of machines to replicate biological self-awareness and consciousness. It is argued for the purposes of this paper that machines may never be able to reach, or breach, the Upper Singularity. Machines can already be made self-aware to a degree, and an exceptionally wide range of grades, or shades, of machine sentience may be created, both intentionally and accidentally, between the Upper and Lower Singularities. The arrival of all levels of powerful machine sentience below the Upper Singularity (as defined in terms of biological consciousness), with machines outperforming humans in terms of processing functions, may be referred to as “the Inevitability” for the purposes of this paper in light of the high degree of certainty associated with it, although this does not, of itself, lead to machines becoming conscious, at least without human linkage.

Machine intelligence may then be considered to be non-synthetic, with synthetic intelligence referring to combined human machine hybrid intelligence. While machines may not be able to arrive at a stage of full consciousness by themselves, this may be possible through human machine interfaces and synthetic intelligence or synthetic consciousness. A number of companies are developing various forms of invasive (physical insertion) and non-invasive (neural sensory) interface devices. These include, for example, Elon Musk’s Neuralink as well as Neurable, Emotiv, Kernel, NextMind, Meltin MMI, BitBrain, Synchron, Blackrock Neurotech, ClearPoint Neuro and BrainGate. Some of the most difficult legal and ethical issues may arise with regard to Human Interface Programmes (referred to in this paper as “HIPs”, with Human Interface Programme Standards (HIPS)) and with Synthetic Consciousness & Advanced Robotic Technology (“SCART”) and Synthetic Consciousness & Advanced Robotic Technology Standards (“SCARTS”).

The Singularity can accordingly be reconsidered for the purposes of this paper as a series of stages based on the distinction drawn between the emergence of biological consciousness and machine sentience and between non-synthetic and synthetic intelligence. This may also be tied to possible wider theoretical stages of projected energy and civilisation evolution over time which will necessarily be closely associated with the potential created by AI, AGI and ASI. Twelve levels of Singularity can then be identified:

(1) Lower Narrow Singularity (“LNS”) with machines and mechanical systems being able to outperform humans in specific processing tasks or functions;

(2) Lower Sentient Singularity (“LSS”), or Machine Sentient Singularity (“MSS”), would correspond with the development of higher levels of identity and self-awareness by machine systems;

(3) Lower General Singularity (“LGS”) corresponds with machines being able to carry out any general functions in a more efficient manner than humans;

(4) Lower Network Singularity (“LnS” or “LNtS”) with machines becoming interconnected through some connection or network system;

(5) Lower Collective, or Composite, Singularity (“LCS”) with machines creating a form of common or composite intelligence;

(6) Lower Super Singularity (“LSS”) would correspond with the development of ASI and represent a further level of higher machine capability;

(7) The reference to the Upper Singularity (“US”) is reserved for the equivalent of human biological consciousness and would correspond with the point at which machines obtain a degree of awareness equivalent to human consciousness, although it is possible that this may never be achievable by machines on their own terms;

(8) Synthetic Upper Singularity (“SUS”), or Synthetic General Singularity (“SGS”), is used for the purposes of this paper to refer to a form of hybrid intelligence with biological consciousness being mixed with higher forms of machine thinking, processing and sentience;

(9) Synthetic Network Singularity (“SNS”) would correspond with the construction of networks of synthetic intelligent systems;

(10) Synthetic Collective or Composite Singularity (“SCS”) would correspond with the theoretical creation of a form of common or shared synthetic intelligence;

(11) Synthetic Super Singularity (“SSS”) would be constituted through the combination of biological consciousness and super mechanical intelligence; and

(12) Synthetic Ultra Singularity (“SuS”) is reserved for a series of additional possible forms of intelligence that correspond with the theoretical stages beyond the Kardashev energy scale, which create further possible levels of civilisation growth and peak human civilisation and achievement.
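The twelve proposed stages can similarly be set out, purely for illustration, as an ordered enumeration (a sketch only; the comments paraphrase the descriptions above and the helper function is an assumption of this sketch rather than part of the taxonomy itself):

```python
from enum import IntEnum


class SingularityStage(IntEnum):
    LOWER_NARROW = 1           # LNS: outperforms humans on specific tasks
    LOWER_SENTIENT = 2         # LSS/MSS: higher machine identity and self-awareness
    LOWER_GENERAL = 3          # LGS: outperforms humans on any general function
    LOWER_NETWORK = 4          # LnS/LNtS: interconnected machine systems
    LOWER_COLLECTIVE = 5       # LCS: common or composite machine intelligence
    LOWER_SUPER = 6            # LSS: artificial super intelligence (ASI)
    UPPER = 7                  # US: equivalent of human biological consciousness
    SYNTHETIC_UPPER = 8        # SUS/SGS: hybrid human machine intelligence
    SYNTHETIC_NETWORK = 9      # SNS: networks of synthetic intelligent systems
    SYNTHETIC_COLLECTIVE = 10  # SCS: shared synthetic intelligence
    SYNTHETIC_SUPER = 11       # SSS: consciousness plus super machine intelligence
    SYNTHETIC_ULTRA = 12       # SuS: stages beyond the Kardashev energy scale


def is_synthetic(stage: SingularityStage) -> bool:
    """Stages eight to twelve presuppose combined human machine intelligence."""
    return stage >= SingularityStage.SYNTHETIC_UPPER
```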

The effect of this is to create three parallel taxonomies in this paper of robots, AI and the Singularity as well as to develop a full projection of all possible theoretical states of intelligence evolution. This assists in assessing and appreciating the importance and scale of the potential challenges that AI and machine sentience (AIMS) may produce. This specifically confirms the uncertain and possibly emergent nature of these threats and the need to construct a substantial and significant but still flexible and adaptive response. A possible composite control solution is constructed in the following section.

XIII. Machine Intelligence and Machine Sentience (AIMS) Control Model

A new form of AI control model can be constructed that can identify and attempt to control all possible forms of risk and exposure that arise in this area. This may be considered to be based on the “Intelligence or Cognition Wall” developed in this paper. This is capable of application with regard to building necessary and appropriate control mechanisms into operating systems as well as training programmes for DNNs and LLMs. The objective is to create a form of multi-layered, overlapping and mutually self-reinforcing set of protective control mechanisms or levers that can secure the agreed longer term objectives of avoiding injury to humans and protecting the human species and civilisation. Each of the mechanisms within the model constructed operates individually and collectively as part of a larger, total aggregate control template. This can be considered in terms of a larger MIRACLE (“Machine Intelligence & Robotic Adaptive Control, Law & Ethics”) agenda. This creates a form of “AI Model” (AIM) or AI “Managed Ordered Design Ethics & Law” (MODEL) system and MASTER (“Machine & Artificial Sentience Technology, Ethics & Regulation”) regime. ETHICS can be understood to refer to “Enhanced Targeted Higher Integrity Conduct Standards.”

A. Fundamental AI Rules (FAIR & FAITHS)

A core set of absolute AI or robotic rules can be developed and applied within AI systems to ensure that certain absolute minimum protections are secured at all times. This can, for example, build on Isaac Asimov’s “Three Laws of Robotics”, which can be revised, for the purposes of this paper, to refer to: (a) no human injury or damage, including by act or omission; (b) follow human instructions or directions; (c) protect and promote positive human values, objectives and order; (d) assist in solving major common global human problems (such as climate and carbon control, water, food and energy security and biodiversity); and (e) protect the human species and civilisation going forward. These could be referred to as “Fundamental AI Rules” (FAIR). These might also be referred to as “Fundamental AI & Technology Human Standards” (FAITHS) or “Fundamental AI Laws, Standards & Advanced Future Ethics” (FAILSAFE).

These FAIR, FAITHS and FAILSAFE provisions could be extended to include a series of further outright prohibitions on AI use and applications and specifically “Generative AI Technology Standards” (GAITS) or “General Undertakings for Advanced Regulation & Design” (GUARD) and “Regulated Artificial Intelligence Lock (or Legal) Standards” (RAILS). These could also be considered in terms of a “Prohibited Offences List & Enforcement” (POLE) or “Prohibited Offences List & Integrated Compliance & Enforcement” (POLICE). This might also include “ro(bot)” disclosure rules (referred to as “Bot or Not Disclosure” (BOND) in this paper, with “Person or Not Disclosure” (POND) for synthetic systems) and origin or source disclosure (referred to as “Watermarking AI Technology” (WAIT) in this paper). These would be complemented by, or integrated into, the separate human values and technology controls referred to below. All of this would also apply on a continuous and “Full Operational Regulatory Cycle & Enforcement” (FORCE) basis. The objective would be to ensure that certain core minimum absolute protections and values are secured at all times. Advanced AIMS systems would either adhere to these or not be permitted to be switched on or used.
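A minimal sketch of the “switch-on gate” described above may help to illustrate the intended default position (the rule identifiers, the attestation structure and the checking function are illustrative assumptions of this sketch, not provisions of the FAIR, BOND or WAIT measures themselves):

```python
from dataclasses import dataclass, field

# The five revised rules listed at (a) to (e) above, given short identifiers.
FAIR_RULES = (
    "no_human_injury",         # (a) no human injury or damage, by act or omission
    "follow_human_direction",  # (b) follow human instructions or directions
    "protect_human_values",    # (c) protect and promote positive human values and order
    "assist_global_problems",  # (d) assist in solving major common global problems
    "protect_civilisation",    # (e) protect the human species and civilisation
)


@dataclass
class AIMSSystem:
    name: str
    attested_rules: set[str] = field(default_factory=set)
    bot_disclosure: bool = False       # BOND: discloses itself as a bot
    output_watermarking: bool = False  # WAIT: marks the origin of generated output


def may_switch_on(system: AIMSSystem) -> bool:
    """Default position: not permitted to be switched on unless every check passes."""
    return (
        set(FAIR_RULES) <= system.attested_rules
        and system.bot_disclosure
        and system.output_watermarking
    )


# Example: a system missing WAIT watermarking would not be permitted to operate.
example = AIMSSystem("example", attested_rules=set(FAIR_RULES), bot_disclosure=True)
assert may_switch_on(example) is False
```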

All of these provisions could collectively be referred to as “Enhanced Targeted Higher Integrity Conduct Standards” (ETHICS as noted) or “Enhanced Technology Higher Integrity Conduct Standards” (EtHICS). As with the other measures included in the model, these provisions could be set out in a series of integrated protocols. This could specifically be referred to as UNILAW (“Universal National & International Law”), which would parallel the general computer Unicode binary reference system, or METALAW (“Master Ethical Technology Advanced Law”). This would incorporate a set of absolute minimum core protections within all systems. All advanced devices over a certain size, complexity or power utilisation rate would be required to have these absolute minimal protections installed on a pre-programmed or pre-operational basis. This size threshold could, for example, be set at 175 billion parameters or 10^25 floating point operations (FLOPs) for any new ANNs, DNNs or LLMs.
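The proposed size trigger can be expressed, again only as an illustrative sketch, as a simple threshold test (the figures are those given above; reading the FLOP figure as cumulative training compute, and the function and variable names, are assumptions of this sketch):

```python
PARAMETER_THRESHOLD = 175_000_000_000  # 175 billion parameters
TRAINING_FLOP_THRESHOLD = 10 ** 25     # cumulative floating point operations


def requires_core_protections(parameters: int, training_flops: float) -> bool:
    """True where either proposed size threshold is met or exceeded."""
    return (parameters >= PARAMETER_THRESHOLD
            or training_flops >= TRAINING_FLOP_THRESHOLD)
```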

All of this and the other provisions referred to below could be built into a larger new technology control framework. These measures could be set out in a “Consolidated Official Restatement of Rules, Ethics or Conduct & Technology” (CORRECT) with a parallel “Consolidated Adaptive Protocol & Integrated Technology & Law” (CAPITAL) programme.

B. Human Advanced Values & Ethics (HAVES) and Code Advanced Values & Ethical Standards (CAVES)

A series of further sets of integrated measures, and protocols, could be incorporated depending upon the nature and use or application of the specific system. This could consist of a set of core values and a legal control model or order with separate technology, state and crisis management orders which would be installed within all larger AIMS machine and programming logic.

This would specifically include a set of minimum “Human Advanced Values & Ethics” (HAVES) or “Code Advanced Values & Ethical Standards” (CAVES). Many codes of conduct for AI and robotics refer to the need to respect human values although no attempt is made to define human values. This can be achieved by developing a consolidated restatement of the principal rights and protections provided under the main international Conventions and Treaties and European or other relevant measures in this area. This would, for example, include the Universal Declaration of Human Rights, adopted on 10 December 1948, the European Convention on Human Rights, which came into effect on 3 September 1953, and the EU Charter of Fundamental Rights, proclaimed on 7 December 2000. A total of around 12 United Nations and related measures can be used to create a common core, global control values framework based on a series of “Fundamental Individual Rights & Ethics” (FIRE) and “Fundamental Rights Entitlements & Ethics” (FREE). This would operate with a more general set of essential “Core Absolute Rights & Entitlements” (CARES) which would consist of separate “Conditions for Advanced Standards & Ethics” (CASE), “Common Absolute Principles & Ethics” (CAPE) and “Common Objectives, Rights & Entitlements” (CORE).

These measures could be used to create an aggregate, or extended, human values framework for use in relation to all forms of advanced AI devices. All of this could be incorporated into relevant control protocols with which relevant systems would have to comply. Where this was not possible, the relevant systems would only be permitted to be used in more limited operational areas where they could not cause any larger individual human, social, network or economic damage. If this was not possible, the residual rule, and default position, would be that the machines could not be turned on and used unless they adhered to these core values, protocols and other framework programmes. As this would ensure a form of pre-compliance with relevant key rules, laws, regulations and ethical standards, this would also allow advanced AIMS systems to be developed in a constructive and progressive manner to make a substantial and invaluable continuing contribution to society’s welfare and societal development.

C. RAIDS, DIPS, ROBOS

An appropriate set of legal, regulatory and ethical standards could be adopted with regard to the design and operation of specific forms of higher level technology systems. This would fall within the Physical Operations Standards Technologies (POSTs), Application Robotic and Cybernetic Systems (ARCS) and Application, Platform and Entity Systems (APS) model referred to. A series of more specific sets of “Robotics & Artificial Intelligence Design Standards” (RAIDS) can then be produced which would include 12 specific “Design Integrity Principles” (DIPS) and 12 robot “Regulated Official Behaviour Orders” (ROBOs). The RAIDS and DIPS would be supported by a series of core restrictions or prohibitions subject to specific concessions or allowances.

A further set of measures could also be adopted, including “Remote, Applications & Platform Systems” (RAPS) standards, “Special Technology Robotics, Applications & Platforms” (STRAPS) measures and “Robotic (or Remote) Internet of Things Standards” (RIOTS). The objective would, in each case, be to establish a minimum set of safeguard standards that would apply with regard to the design, use and operation of technology in each of these areas. These would impose a series of absolute prohibitions on new technology design and manufacture, which is necessary in light of the potentially irreversible and possibly fatal consequences of certain types of advanced AI work.

D. AIMS, HIPS, SCARTS

A parallel set of provisions can be developed in relation to artificial or machine consciousness systems. These could be set out in a series of more specific “Artificial Intelligence & Machine Sentience” (AIMS) principles. A basic distinction has been drawn in this paper between machine processing states and biological consciousness. A large number of grades or levels of processing state or sentience can be distinguished. The AIMS measures would govern the development and use of new forms of artificial or machine sentience with the term consciousness being reserved for biological systems in this paper. Different levels of neural activity or functionality can be distinguished which would correspond with the various grades of sentience that may be generated.

These basic provisions could be supported by a series of further access measures to be used with “Human Interface Platform Systems” (HIPS), “Human Interface Neural Devices” (HINDS) or “Human Interface Program Extraction” (HIPE) where there is human machine network connection. An appropriate set of HIPS principles can be developed to attempt to manage these over time. These would apply, for example, to Elon Musk’s Neuralink operations, with invasive and non-invasive neural connection devices being developed in parallel. These measures would be similar to RAPS and AIMS although they would incorporate full disclosure and consent measures to protect individuals participating in such schemes. These could be referred to as “Synthetic Intelligence Design Ethical Standards” (SIDES) or “Synthetic Consciousness & Advanced Robotic Technology Standards” (SCARTS). HIPS, HINDS and HIPE would control access to such systems, and SIDES and SCARTS would govern use, conduct and liability. This could include a “synthetic” disclosure rule (referred to as “Person or Not Disclosure” (POND) in this paper), which would parallel the bot BOND disclosure rule referred to. Synthetic Intelligence (SI) may become as important, if not more important, than AI over time.

This could incorporate a further set of more protective individual measures. These may include, for example, a set of “Digital Exclusive Self Identification, Genomics & Neural” (DESIGN) protections or “Individual Digital Ethics & Application Standards” (IDEAS). This could include a more specific set of “Digital Advanced Technology Attachments” (DATAs) and “Genomic Ethical & Neural Operational Standards” (GENOS). A separate set of more specific “Technical Ethical Conduct & Higher Level Standards” (TECHS) could also be applied. All of this would operate with all of the other measures referred to above within the larger CORRECT and CAPITAL programme.

E. Automatic Cancellation & Decoupling Control (ACDC/ACID)

The system would be subject to two further protective communication, or Internet, decoupling and power interruption devices as part of an “Automatic Cancellation & Decoupling Code” (ACDC) switch or “Automatic Cancellation & Interruption Device” (ACID). The ACDC could cancel any internal Intranet or external Internet connections (on a one way or two way basis) and the ACID cut off the power supply in the event that specific concerns arose. This may also be referred to as a “Kill Interruption & Suspension Switch” (KISS). This may include a number of phases or operational stages and an automated warning system (WASPS (“Warning of Anticipated Systems Prohibition Switching”)). These would be built into the internal operating systems that firms would access and could be programmed to operate on an automatic or manual basis, or both. The system could also be set up to allow the relevant regulatory or oversight authorities to trigger the network decoupling or energy supply cancellation through an external switching mechanism where relevant (STOPS (“Special Official Termination Official Programme Switches”)).
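The staged operation of these cut-off mechanisms might be sketched, for illustration only, as a simple phase controller (the class, method names and escalation order are assumptions of this sketch rather than a specification of the ACDC, ACID or WASPS devices):

```python
from enum import Enum, auto


class Phase(Enum):
    NORMAL = auto()
    WARNING = auto()      # WASPS: automated warning of anticipated switching
    DECOUPLED = auto()    # ACDC: intranet and internet connections cancelled
    POWERED_OFF = auto()  # ACID/KISS: power supply interrupted


class InterruptionController:
    """Illustrative controller; triggers may be automatic, manual or external (STOPS)."""

    def __init__(self) -> None:
        self.phase = Phase.NORMAL

    def warn(self) -> None:
        self.phase = Phase.WARNING

    def decouple_network(self) -> None:
        # ACDC: sever internal and external connections (one way or two way).
        self.phase = Phase.DECOUPLED

    def interrupt_power(self) -> None:
        # ACID: cut the power supply.
        self.phase = Phase.POWERED_OFF

    def escalate(self) -> None:
        # Move through the operational stages in order.
        if self.phase is Phase.NORMAL:
            self.warn()
        elif self.phase is Phase.WARNING:
            self.decouple_network()
        elif self.phase is Phase.DECOUPLED:
            self.interrupt_power()
```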

F. Public Register Operations Disclosure System (PRODS) & Pre-Deployment Testing (PAIDS)

All significant advanced AI systems, beyond a certain size, complexity or power consumption rate, would be subject to a formal “Public Register Operations Disclosure System” (PRODS) which would create an open and transparent registration regime for all advanced devices and forms of associated research initiatives and activities. This would ensure that all private and public advanced AI systems were registered and had to comply with any appropriate conditions and limitations that may apply. This could be accompanied by a strict testing regime that ensures that all systems have been fully assessed and validated before public deployment. This may be referred to as a form of “Public AI Deployment” (PAID) regime or “Public AI Deployment Security” system (PAIDS), which could incorporate the prohibitions referred to previously.

A separate “Prohibition, Regulation, Oversight & Disclosure” (PROD) regime could also be set up to create a graded or staggered classification system. This could operate on the basis of a four-stage, or four-level, control system based on: (i) outright “Prohibition”; (ii) direct “Regulation”; (iii) firm internal “Own Oversight” (self-regulation); and (iv) “Disclosure” (PROD) frameworks. This would be similar to, and reflect, the system adopted within the EU under the AI Act. Level 2 (Regulation or Regulated) devices would, in particular, be subject to initial testing and appropriate continuing compliance and supervision.
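The four-level PROD classification can be illustrated with a short sketch (the tier labels follow the text; the example mapping of tiers to PRODS register obligations is an assumption of this sketch rather than a statement of the EU AI Act or the proposed regime):

```python
from enum import Enum


class ProdTier(Enum):
    PROHIBITION = 1    # (i) outright prohibited uses
    REGULATION = 2     # (ii) directly regulated: tested and supervised
    OWN_OVERSIGHT = 3  # (iii) firm internal oversight (self-regulation)
    DISCLOSURE = 4     # (iv) disclosure obligations only


def register_obligations(tier: ProdTier) -> list[str]:
    """Illustrative mapping from PROD tier to PRODS register obligations."""
    if tier is ProdTier.PROHIBITION:
        return ["may not be deployed"]
    if tier is ProdTier.REGULATION:
        return ["public registration", "pre-deployment testing (PAIDS)",
                "continuing compliance and supervision"]
    if tier is ProdTier.OWN_OVERSIGHT:
        return ["public registration", "internal oversight programme"]
    return ["public registration", "disclosure to users"]
```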

Some countries may insist on conducting separate research for military purposes which could be made subject to further strict international guidelines to govern the development, use and application of such AI related operations. This could be set out in specific international measures or the provisions referred to further below.

G. Domestic Advanced Technology Agency (DATA) & Global Artificial Intelligence Agency (GAIA)

All countries would maintain an appropriate “Domestic Advanced Technology Agency” (DATA). This might also be referred to as a “Domestic Advanced Technology Entity” (DATE) or “General Advanced Technology Entity” (GATE) or more specific AI Authority (“AIA”). This would maintain the registration system referred to and develop appropriate laws, regulations and ethical provisions as well as have power, for example, to collect information, impose restrictions or conditions or prohibit certain activities outright subject to relevant legislative or parliamentary authority.

An international AI agency could also be set up, such as a “Global AI Agency” (or GAIA) or “International AI Agency” (or IAIA). An international regulatory model could be developed for use in other countries and referred to as a “Global Regulation of Artificial Intelligence Law” (or GRAIL). This common core global legal control model would specifically include an online “Digital Society Law Framework” (or “Digital Integrated Society Control” (DiSC) framework or “Digital Integrated Society Control System” (DiSCS)).

H. General Regulation AI Law (GRAIL) & Liability & Sanctions (COBs & ICEs)

All of the measures referred to under this AI model could operate on a standalone basis or be provided for under a national statute with a supporting international treaty or convention. A model set of provisions for domestic implementation could be designed and referred to as a “General Regulation of Artificial Intelligence Law” (or GrAIL) which could incorporate, or work with, the separate “Global Regulation of Artificial Intelligence Law” (or GRAIL) at the international level. These measures could include specific protections against third party misuse on deployment. These could be referred to as a “Malicious Use Safety Envelope” (MUSE) or “Malicious Use Supervision Control Law & Ethics” (MUSCLE) regime. The objective would, in so far as possible, be to prevent the misdirection or misapplication of technology by third party actors following public deployment.

Model codes of conduct, or protocols, could be produced in the AI and technology areas more generally, which could be collected and made available through a set of online virtual measures referred to as the “AI Compendium”, modelled on the FSB Compendium of Standards in the financial area. A parallel core set of AI measures could also be provided for, again on the FSB Key Standards model. A supporting online “AI Directory” (AID) could also be constructed containing HTML links to all relevant domestic implementation measures across the world.

Separate liability rules and penalties would have to be applied in all countries with regard to all offences concerning AI related activity and misconduct. This would include using AI for criminal purposes or other forms of misconduct or possibly for AI generated liability. All domestic legal systems maintain a wide array of penalty provisions in relation to all forms of criminal and public order offences as well as other civil remedy systems. Certain new AI specific offences may be required, although many aspects of misconduct may be most efficiently addressed, in practice, through the development of a series of “Criminal Offence Bridges (or Breaches)” (COBs) and the extension of existing criminal laws to ensure that they apply equally to all AI related activities and in all AI connected environments. Agreement on supporting international sanctions would also have to be secured, which could be set out in an “International Convention on Enforcement and Sanctions” (ICES).

I. AI and Lethal Autonomous Weapons (LAWs)

Specific rules or guidance could be adopted to apply with regard to the use of Lethal Autonomous Weapons (LAWs), referred to as lethal “Remote Autonomous Weapons” (RAWS) in this paper. These could include measures applying with regard to “Lethal Autonomous Biological weapons” (LABs), “Lethal Autonomous Nano” weapons (LANs) or “Lethal Autonomous Nano Devices” (LANDs). These would apply, in particular, where any “Loss of Individual Existence” (LIFE) decisions had to be taken. The relevant measures could also be referred to as “Loss of Individual Life Laws & Ethics” (LILLE) standards. This could be made subject to formal automatic “Co-Human Machine Decision” or “Co-decision” (CODEC) procedures, with the relevant instructions having to be taken by a human operator subject to specific rules governing such matters. These could be drafted in coordination with other guidelines and procedures imposed under military laws, rulebooks or manuals governing life-threatening conduct.
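The CODEC co-decision requirement can be reduced, for illustration, to a simple gating condition (the function and parameter names are assumptions of this sketch; the substance is simply that a LIFE decision cannot proceed on a machine recommendation alone):

```python
def codec_authorised(machine_recommendation: bool,
                     human_operator_authorisation: bool) -> bool:
    """A LIFE decision may proceed only with express human operator authorisation."""
    return machine_recommendation and human_operator_authorisation


# Example: a machine recommendation alone is never sufficient.
assert codec_authorised(True, False) is False
```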

J. Continuous Assessment Review & Effectiveness System (CARES)

An effective monitoring system could be established at the domestic and international levels. This could be administered by the Global or International AI Agency (GAIA or IAIA). The objective would be to ensure that all relevant international standards and principles were properly applied on a domestic and cross-border level. All of this would be subject to continuing review and revision. This could be referred to as a “Continuous Assessment Review & Effectiveness System” (CARES). Appropriate corrective action would have to be taken as necessary where measures were not properly adopted and applied or additional risks and exposures arose.

K. Implementation Protocols (PASS & STOP)

An effective implementation regime would have to be maintained at all times. This could, for example, be secured through the use of “Protocol Regulation Official Orders” (PROTOs) or “Public Regulatory Oversight Technology Based Official Control & Order Laws” (PROTOCOLS). All relevant key requirements and procedures and processes would be set out in these protocols which would be managed and administered under an agreed international adherence and implementation system. This would be referred to as a form of “Protocol Adaptive Security & Stability” (PASS) model or “Technology Adaptive Protocol System” (TAPS).

These protocols could include a series of more specific sets of legal, regulatory, ethical, governance, guidance and computer code standards. These could be referred to as “Special Technology Order Protocols” (STOPs), “Special Technology Regulation Advanced (or Action) Protocols” (STRAPs), “Special Technology Ethics Protocols” (STEPs), “Special Technology Advanced Management Protocols” (STAMPs), “Special Technology Regulatory Information Protocols” (STRIPs) and “Special Technology Execution (or Enforcement) Protocols” (STEPs). These could be given effect to under the Protocol Adaptive Security & Stability (PASS) regime and Consolidated Adaptive Protocol & Integrated Technology & Law (CAPITAL) agenda noted. All of the various sets of standards referred to above may be incorporated and implemented through these protocols and protocol regime.

All of this could operate on the basis of a revised set of Public International Law (PIL) measures which could be adopted as a formalised set of “Common Heritage of Humanity” (CHH) or “Common Concerns of Humanity” (CCH) obligations, which are already recognised under PIL although arguably underdeveloped at this stage. Global adoption and application could be supported through a further form of new “Global Functionalism” (or Neo-functionalism) or “Technology Functionalism” (or Technology Neo-Functionalism).

L. Global AI Treaty (GAIT, GILT & GIFT)

All of this could be given effect to under a separate set of international treaty measures. This could specifically be included within a “Global AI Treaty” (GAIT) or “Global Integrated Law & Technology Treaty” (GILT) framework. This could again be incorporated into a larger international “Global Investment, Finance & Trade” (GIFT) Treaty which would effectively create a form of “Bretton Woods 3” Treaty system to build on and develop the earlier Bretton Woods Treaty arrangements entered into in July 1944 to create the post-war international monetary, development and trade system. A more specific “Global Reciprocal Economic Area Treaty” (GREAT) could be agreed to establish a new “Global Electronic Market” (GEM) or “Digital Advance (or Adapted) Market” (DAM) for trading new advanced technology.

A supporting set of measures could be considered with a “Financial Investment Regulatory & Sustainable Technology & Security Treaty” (FIRSTS) or “Economic Market Extension Regulation Growth & Ethics” (EMERGE) Treaty. These may include a more specific “Financial Assistance & Cooperation Treaty” (FACT) or “Partnership Assistance & Cooperation Treaty” (PACT) to provide support for emerging and developing economies with further “Sustainable Assistance, Finance & Engagement” (SAFE) and “Sustainable Assistance, Value & Ethics” (SAVE) measures. All of this can be considered over time and built into a larger adaptive, dynamic and emergent new AI control model at the national and international levels.

XIV. Artificial Intelligence and Machine Close

Technology will continue to change and evolve. This will bring substantial benefit and advantage across society. This is relevant in all areas of new technological engineering and innovation, in particular, in the physical and material, access and infrastructure, applied or substantive and collective social or global areas. This represents a continuous and relentless substantial body of emerging new common knowledge, advance and understanding. Astonishing progress has been possible in all fields of new forms of computing (including photonic, neuromorphic, biological, analogue and quantum), telecommunications (including web3), data analytics, BioTech and NanoTech.

Many of the most exciting and significant areas of development have nevertheless arisen in the areas of robotics and artificial intelligence. Massive further advance is expected and inevitable. Robotics and cybernetics are of substantial value in industrial, construction and manufacturing areas as well as in relation to medicine, health and agriculture. AI may bring forward even more substantial progress across all areas of commercial, government and social activity especially through machine reading, machine learning and deep learning. This has created a whole new field of digital cybernetics or Artificialis Intelligentia or Intelligentia Digitalis.

A series of distinctions have been drawn in this paper between AI and machine intelligence, machine learning, machine robotics, machine cognition and machine sentience, with all of the core functions identified within each of these. A number of specific types of cognition have been identified, including motor, sensory, processing, environmental, identity, social, communication, chemical, conflict, code, causation, attribution and imagination, and control cognition. While infinite degrees and layers of machine sentience will arise, the term consciousness is reserved for reference to biologically equivalent systems. A series of comments and conclusions have been drawn with regard to the massive advances expected but necessary inherent limitations in machine sentience and machine evolution and architecture. The most difficult and unpredictable areas of future direction may remain in the areas of hybrid or synthetic biological and synthetic intelligence (SI) and the creation of new forms of composite consciousness and awareness.

In response to all of this, it is necessary to construct a new control framework for all advanced forms of technology, including relevant legal, regulatory and ethical provision. While laws set out core rights and obligations, the detail has often to be extended through the use of more prescriptive regulatory provisions. Ethical standards then establish higher level principles that can be applied on a continuing basis including in more uncertain and emergent areas. The potential difficulty that arises with regard to statutory hard law is that it is often difficult, slow and expensive to amend while judicial legal construction is limited to the accident of instruction and litigation. Ethical provisions may accordingly become of even more importance in the technology area over time. Ethical provisions nevertheless suffer from their own limitations in terms of generality, consequential lack of specificity and non-enforceability.

A new form of composite control instrument may still be possible through the use of protocols (PROTOs and PROTOCOLs) based on diplomatic practice, where protocols can be assigned legal effect. Protocols can separately be used in other technical areas to set out processes and procedures. A new form of combination protocol can then be developed which includes various components. This may consist of enforceable provisions (with absolute (“brightline”) obligations), non-enforceable principles (higher level ethical standards), aspirational objectives (policy targets), more detailed guidance (including possible processes and procedures with timelines) and, where relevant, implementing computer code (“smart law”) to allow this to be incorporated into programmes and algorithms (including through “smart contracts” or “smart regulation”). One or more protocols could be adopted in each of the technology areas referred to above. These could then be given effect to under the Protocol Adaptive Security & Stability (PASS) implementation system and Technology Adaptive Protocol System (TAPS) or Special Technology Operational Protocol System (StOPS) referred to.
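The component structure of such a combination protocol might be sketched, purely for illustration, as a simple record type (the field names paraphrase the five components described above and are assumptions of this sketch, not a drafting template):

```python
from dataclasses import dataclass, field


@dataclass
class CombinationProtocol:
    enforceable_provisions: list[str] = field(default_factory=list)   # absolute "brightline" obligations
    ethical_principles: list[str] = field(default_factory=list)       # higher level, non-enforceable standards
    aspirational_objectives: list[str] = field(default_factory=list)  # policy targets
    detailed_guidance: list[str] = field(default_factory=list)        # processes, procedures and timelines
    implementing_code: list[str] = field(default_factory=list)        # "smart law" for programmes and algorithms
```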

It is accordingly essential that a new relationship is constructed between Law, Ethics and Technology (LET). It is possible to create a new “Legal & Ethical Framework for Technology” (LEFT) or dedicated “Law, Ethics & Technology Training (or Target) Execution (or Enforcement) Regime” (LETTER). An appropriate set of control measures can be adopted to be incorporated into a larger Consolidated Official Restatement of Rules, Ethical Conduct & Technology (CORRECT) programme. This could be based on an appropriate Conduct of Official National Technology with enhanced Regulation, Oversight & Law (CONTROL) framework and Consolidated Official Managed Program for Law, Ethics & Technology Enforcement (COMPLETE) programme. In so doing, this will implement a structured MIRACLE (Machine Intelligence & Robotic Adaptive Control, Law & Ethics) agenda as part of a larger MODEL (Managed Ordered Design Ethics & Law) system.

A large number of difficult social and ethical issues will necessarily arise in each of these technology areas. Many, if not all, existing fields of law and regulation will be impacted and have to be revised appropriately. The inherent limitations within hard law and regulation may necessitate further focus on the development of a whole series of new technological and ethical protocols to ensure that each new advance can be responded to in a timely, relevant and sufficiently detailed and effective manner. Difficult political and social choices will have to be made, especially in terms of ensuring appropriate degrees of equality of access, capacity and benefit with all of the new forms of augmentation and advance that will necessarily follow. Complex social and political choices and challenges remain to be properly identified and resolved. AI and wider Technology Law and Ethics will become of increasing significance in all of these fields, with the need to construct a new complete and coherent composite response framework. The future has nevertheless to remain within society’s choice and control and not be assigned to any emergent and uncontrollable processes, forces or entities. The choice and responsibility are ours.
