Like most other countries around the world, African countries are in the initial stages of understanding the costs and benefits of AI, surveying the regulatory landscape, and evaluating future regulations. Smart Africa, an alliance of 36 African countries, developed the 2021 Blueprint: Artificial Intelligence for Africa. The blueprint articulates a bold vision for Africa’s opportunities and its ability to address challenges by focusing on five distinct “Framework Pillars,” which are:
- “Human capital, underscoring the importance of educational development and enhancing the proficiencies, competencies and understanding of individuals who use and develop artificial intelligence solutions;
- Lab to Market initiatives that foster research, development, innovation, and commercialization;
- Networking, cooperation, and collaboration, in pursuit of joint partnerships across private and/or public sectors to favorably impact the uptake of AI among all;
- Infrastructure investments that will foster the development of digital and telecommunication systems which support efficient data collection and usage; and
- Regulation that is effective, infused with an ethics that support equality, and international best practices.”
Of import is the fundamental recognition that AI can only be as intelligent as the humans who power it, the human data that drives it, the human need for innovation that brings forth the right questions, and the requirement that our ethical concerns center on the need to support equity in the use of AI.
Safety/Patient-Centered Approaches – Canada, Australia/New Zealand
While Canada seeks a position of influence in the world of AI, there is recognition in both the private and government sectors that a normative framework is necessary. As in many other countries, this has manifested initially in the adoption and consideration of various laws and regulations to protect privacy, with a focus on the future and on crafting an appropriate balance between innovation and safety. The government summarizes its approach as one which must:
- “Understand and measure the impact of using AI by developing and sharing tools and approaches;
- Be transparent about how and when we are using AI, starting with a clear user need and public benefit;
- Provide meaningful explanations about AI decision making, while also offering opportunities to review results and challenge these decisions;
- Be as open as [possible] by sharing source code, training data, and other relevant information, all while protecting personal information, system integration, and national security and defen[s]e; and
- Provide sufficient training so that government employees developing and using AI solutions have the responsible design, function, and implementation skills needed to make AI-based public services better.”
This approach is reflected in Canada’s Directive on Automated Decision-Making, which seeks a sensible, middle-of-the-road approach to administrative AI use “that reduces risks to Canadians and federal institutions, and leads to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law.” Canada’s sensibilities are also reflected in its Algorithmic Impact Assessment tool, which aims to root out bias and inequality at the front end of automated decision-making systems by providing human and ethical measures that developers must build in and test against in order to gain government approval. These frameworks and tools can be used in conjunction with existing laws to provide some protection against increasing inequality and harm in the healthcare space. Existing laws include the Personal Information Protection and Electronic Documents Act, the Canadian Consumer Product Safety Act, the Food and Drugs Act, the Motor Vehicle Safety Act, the Bank Act, the Canadian Human Rights Act and provincial human rights laws, and the Criminal Code.
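To make the questionnaire-based approach concrete, below is a minimal, hypothetical sketch of how a tool in the spirit of the Algorithmic Impact Assessment might translate a developer’s answers into an impact level. The questions, point values, and thresholds are illustrative assumptions made for this article, not the official AIA scoring scheme.

```python
# Hypothetical sketch of a questionnaire-based impact scoring tool in the spirit
# of Canada's Algorithmic Impact Assessment. The questions, point values, and
# thresholds are illustrative assumptions, not the official AIA scheme.

from dataclasses import dataclass


@dataclass
class Answer:
    question: str    # e.g., "Does the system use personal health data?"
    points: int      # risk points contributed by this answer
    mitigated: bool  # whether a documented mitigation measure is in place


def impact_level(answers: list[Answer]) -> int:
    """Map raw and mitigated risk scores to an impact level from 1 (low) to 4 (very high)."""
    raw_score = sum(a.points for a in answers)
    mitigated_score = sum(a.points for a in answers if not a.mitigated)

    def level(score: int) -> int:
        # Illustrative thresholds only.
        if score < 10:
            return 1
        if score < 25:
            return 2
        if score < 40:
            return 3
        return 4

    # Mitigations can lower the level, but never by more than one tier.
    return max(level(mitigated_score), level(raw_score) - 1)


if __name__ == "__main__":
    answers = [
        Answer("Uses personal health data", 15, mitigated=True),
        Answer("Decision affects access to care", 20, mitigated=False),
        Answer("Model outputs are not easily interpretable", 10, mitigated=True),
    ]
    print("Impact level:", impact_level(answers))  # -> Impact level: 3
```

Under Canada’s Directive, higher impact levels trigger progressively stricter requirements for the automated system, which is the behavior the threshold mapping above is meant to illustrate.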
Canada has also proposed amendments to its regulations for medical devices that will allow the minister to:
“[A]t any time, impose terms and conditions on a medical device license, or amend those terms and conditions, after considering the following factors:
(a) whether there are uncertainties relating to the benefits or risks associated with the device;
(b) whether the requirements under the Act are sufficient to
(i) maintain the safety and effectiveness of the device,
(ii) optimize the benefits and manage the risks associated with the device, and
(iii) identify changes and manage uncertainties relating to the benefits and risks associated with the device;
(c) whether the proposed terms and conditions may contribute to meeting the objectives set out in subparagraphs (b)(i) to (iii);
(d) whether compliance with the proposed terms and conditions is technically feasible; and
(e) whether there are less burdensome ways to meet the objectives of the proposed terms and conditions.”
Health Canada’s enforcement priorities are risk-based and rest on existing departmental policies. Violation of the terms and conditions imposed by the minister could ultimately result in prosecution.
Finally, to close any gaps left by the various laws and regulations that will apply in Canada, there is the proposed Artificial Intelligence and Data Act (AIDA). AIDA, which, if passed, is expected to come into force no sooner than 2025, was designed to align with “evolving international norms in the AI space,” noting the regulations proposed in the European Union (EU), the United Kingdom (UK), and the United States as supporting Canada’s need to adopt “a corresponding framework to enable citizen trust, encourage responsible innovation, and remain interoperable with international markets.”
In a healthcare-related example from other countries that appear to be taking a patient-centered approach, The Royal Australian and New Zealand College of Radiologists (RANZCR) felt compelled to act on AI years ago, as the benefits to their profession felt imminent while, at the same time, “getting this wrong for the population in question gives significant potential for harm.” Whether the “population” in question is one whose lungs are compared against algorithms trained on a population with a higher incidence of smoking, tuberculosis, or opportunistic lung infections, the details of the data matter, and without context, “[t]he unlabeled data on their own… are meaningless, no matter how voluminous the amount of information.” Given the potential for both enormous benefit and great harm, RANZCR proactively moved to define an approach to AI in the clinical radiology and radiation oncology space in Australia and New Zealand that would, first and foremost, be safe for patients. RANZCR’s Ethical Principles for Artificial Intelligence in Medicine call for:
“Principle 1: Safety. The first and foremost consideration in the development, deployment or utilization of ML or AI must be patient safety and quality of care, with the evidence base to support this.
Principle 2: Privacy and Protection of Data. A patient’s data must be stored securely and in line with relevant laws and best practice.
Principle 3: Avoidance of Bias. To minimize bias, the same standard of evidence used for other clinical interventions must be applied when regulating ML and AI, and their limitations must be transparently stated.
Principle 4: Transparency and Explainability. When designing or implementing ML or AI, consideration must be given to how a result that can impact patient care can be understood and explained by a discerning medical practitioner.
Principle 5: Application of Human Values. The doctor must apply humanitarian values (from their training and the ethical framework in which they operate) to any circumstances in which ML or AI is used in medicine, but they also must consider the personal values and preferences of their patient in this situation.
Principle 6: Decision-Making on Diagnosis and Treatment. While ML and AI can enhance decision-making capability, final decisions about care are made after a discussion between the doctor and patient, taking into account the patient’s presentation, history, options and preferences.
Principle 7: Teamwork. To deliver the best care for patients, each team member must understand the role and contribution of their colleagues and leverage them through collaboration.
Principle 8: Responsibility for Decisions Made. The potential for shared responsibility when using ML or AI must be identified, recognized by the relevant party and recorded upfront when researching or implementing ML or AI.
Principle 9: Governance. A hospital or practice using or developing ML or AI for patient care applications must have accountable governance to oversee implementation and monitoring of performance and use, to ensure practice is compliant with ethical principles, standards and legal requirements.”
While this framework has benefits, ethical principles are of limited value without a framework for accountability, education and training, opportunities for deeper process alignment, and a continuous improvement process. To truly harness the power of AI, there must be a continuous focus on and respect for a patient’s autonomy and for beneficence, a commitment to non-maleficence, and an equally strong commitment to equity.
The concerns about patient health and safety driving the approaches in more cautious jurisdictions appear, in large part, to reflect a need to address the issues that most urgently affect patient experience of and satisfaction with AI: the potential for bias, opacity and incontestability, and erosion of privacy. AI systems must be designed, from the start, with attention to, and an understanding of, bias, how it currently permeates our healthcare systems (thereby affecting our potential data sets and algorithmic inputs), and the continued need to monitor for and root out bias whenever and wherever it is found in AI systems.
Patchwork of Laws, Guidance, and Initiatives – United States of America
The United States currently has a patchwork of laws, guidance and executive orders from the White House, guidance from various federal agencies, and a mix of state laws that attempt to regulate AI use cases. As in Canada and other countries, many existing laws related to data privacy and consumer protection apply to AI. However, the U.S. has not yet created a comprehensive framework of laws and regulations on AI. A more comprehensive regulatory framework may be created in the near future if Congress passes such legislation, but proposals to date have not gained momentum. While not an exhaustive review of all U.S. laws and guidance on AI, the following is a selective sampling intended to provide an overview of the existing U.S. AI healthcare regulatory framework.
The U.S. Congress passed the National Artificial Intelligence Initiative Act (NAIIA) of 2020 on January 1, 2021. This law provides for a coordinated program across the federal government to accelerate AI research and application for economic prosperity and national security. Following passage of the NAIIA, the National Artificial Intelligence Initiative was created with the main purposes of ensuring continued U.S. leadership in AI research and development; leading the world in the development and use of trustworthy AI systems in public and private sectors; preparing the present and future U.S. workforce for the integration of artificial intelligence systems across all sectors of the economy and society; and coordinating ongoing AI activities across all federal agencies. To implement this initiative, a new office was created under the White House Office of Science and Technology Policy (OSTP) to support the president in reaching the aforementioned goals and to support the NAIIA. This law and initiative, while significant, focus on coordination and strategy at the federal agency level, primarily in relation to national security and the economy.
While the NAIIA provides direction and coordination across the federal government on AI, the U.S. does not currently have a framework of laws and regulations that governs the development and use of AI, nor one that addresses AI in healthcare broadly. Medical devices, including AI/ML-enabled medical devices, are regulated by the Department of Health and Human Services (HHS) U.S. Food and Drug Administration (FDA) in accordance with the Federal Food, Drug, and Cosmetic Act. However, the FDA’s traditional regulatory paradigm for medical devices is not well suited to adaptive AI and ML technologies; accordingly, the FDA has issued a discussion paper on a proposed regulatory framework and a number of other related guidance documents on the topic, seeking stakeholder input. A more comprehensive regulatory framework for AI/ML in medical devices may be forthcoming.
The HHS Office of the National Coordinator for Health Information Technology (ONC) Health IT Certification Program (Certification Program) is a voluntary program established by the ONC to provide for the certification of health information technology. Requirements for certification are established by standards, implementation specifications, and certification criteria adopted by HHS. The ONC recently issued a proposed rule, published on April 18, 2023, on health data, technology, and interoperability. The preamble recognizes that “the U.S. healthcare industry does not have universally applicable, consistently applied framework(s), best practices, or norms for transparency about technical and performance aspects and organizational competencies (e.g., model risk management) in place for [decision support interventions].” In the proposed rule, ONC proposes to rename the existing “clinical decision support” (CDS) certification criterion to “decision support interventions” (DSIs) and to introduce transparency requirements under this criterion. The proposal introduces “information transparency to address uncertainty regarding the quality of predictive DSIs that certified Health IT Modules enable or interface with, so that potential users have sufficient information about how a predictive DSI was designed, developed, trained, and evaluated to determine whether it is trustworthy.” ONC also proposed requirements that would enable users to know when a DSI uses demographic, social determinants of health, or health assessment data. While the voluntary certification requirements apply only to health information technology, and not to all AI/ML-enabled healthcare products, health information technology is nonetheless widely used. The proposed DSI requirements may thus have broad impact, albeit on a subset of AI/ML technology in healthcare, if the proposal is finalized.
Furthermore, on the regulatory front, the HHS Office for Civil Rights (OCR) recently published a notice of proposed rulemaking (NPRM) to revise its regulations on nondiscrimination in health programs and activities. OCR issued this proposed rule regarding Section 1557 of the Affordable Care Act (ACA), which prohibits discrimination on the basis of race, color, national origin, sex, age, or disability in certain health programs and activities. If finalized, the proposal would make explicit that covered entities are prohibited from discriminating through the use of clinical algorithms on the bases prohibited by Section 1557. OCR sought comment on whether to limit this provision to clinical algorithms or to include other forms of automated or augmented decision-making tools or models, such as AI/ML. OCR is expected to respond to public comments and determine in future rulemaking whether and how to expand the nondiscrimination protections to AI/ML decision-making tools.
In October 2022, the White House released the Blueprint for an AI Bill of Rights—Making Automated Systems Work for the American People. This publication was a signal to the industry and to Congress that additional consumer protections and safeguards against the harms of AI are needed now. It sets forth a framework for consumer protections and considerations, described in more detail below. The White House has provided other guidance and directives as well, including release of an Executive Order (EO) in February 2023 entitled “Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government.” This EO directs federal agencies to “promote equity in science and root out bias in the design and use of new technologies, such as artificial intelligence.” It further states that “[w]hen designing, developing, acquiring, and using artificial intelligence and automated systems in the Federal Government, agencies shall do so, consistent with applicable law, in a manner that advances equity.” Through these actions, the administration is guiding the federal government on how to responsibly use AI.
The AI Bill of Rights applies to AI across sectors, providing a national values statement and toolkit to help build protections into technological design processes and to inform policy decisions. The Blueprint for an AI Bill of Rights outlines five principles to govern automated systems:
- Safe and Effective Systems: Individuals and communities should be protected from unsafe or ineffective systems; such systems should be developed with consultation from diverse stakeholders and experts and should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring. Individuals should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems as well as from the compounded harm of its reuse.
- Algorithmic Discrimination Protections: Algorithms and systems should be designed in an equitable way and should not disfavor individuals based on classifications protected by law (e.g., race, color, ethnicity, sex, religion, age, national origin, disability, veteran status, or genetic information). AI system developers should use proactive and continuous measures to guard against algorithmic discrimination, including equity assessments and algorithmic impact assessments featuring both independent evaluation and plain language reporting. Healthcare clinical algorithms that physicians use to guide clinical decisions may include sociodemographic variables that adjust or “correct” the algorithm’s output on the basis of a patient’s race or ethnicity, which can lead to race-based health inequities (see the illustrative sketch following this list).
- Data Privacy: Individuals should be protected from abusive data practices via built-in protections, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Automated systems developers are encouraged to seek consent before using personal data. Consent should only be used to justify data collection in cases where it can be “appropriately and meaningfully given.” If it is not possible to obtain consent in advance, developers are encouraged to implement privacy by design safeguards. Data in sensitive domains, including healthcare-related data, should be subject to enhanced protections and restrictions.
- Notice and Explanation: AI system developers should provide timely, accessible, plain-language descriptions of overall system functioning and the role automation plays, notice that automated systems are in use, the individual or organization responsible for the AI system, and explanations of outcomes. Automated systems should provide explanations that are technically valid, meaningful, and useful to operators of the system.
- Human Alternatives, Consideration, and Fallback: Individuals should be able to opt out from automated systems in favor of human alternatives, where appropriate or required by law. Appropriateness should be determined based on reasonable expectations in a given context in addition to ensuring broad accessibility and protecting the public from especially harmful impacts. Automated systems with an intended use within sensitive domains should be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions.
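To show why the Blueprint singles out race-adjusted clinical algorithms, the following is a hypothetical worked example. The formula, coefficients, and referral threshold are invented for illustration and do not correspond to any real clinical calculator; the point is only that a demographic “correction” factor can silently move a patient across a care-eligibility cutoff.

```python
# Hypothetical illustration of how a race-based "correction" factor in a clinical
# scoring algorithm can shift a patient across a care-eligibility threshold.
# The formula, coefficients, and threshold are invented for demonstration only.

def risk_score(lab_value: float, age: int, race_adjustment: float = 1.0) -> float:
    """Toy risk score: higher values indicate a greater estimated need for follow-up care."""
    base = 0.9 * lab_value + 0.5 * age
    return base * race_adjustment


REFERRAL_THRESHOLD = 100.0  # patients at or above this score are referred for follow-up

patient = {"lab_value": 100.0, "age": 30}

unadjusted = risk_score(**patient)                      # 0.9*100 + 0.5*30 = 105.0 -> referred
adjusted = risk_score(**patient, race_adjustment=0.85)  # 105.0 * 0.85 = 89.25 -> not referred

print(f"Unadjusted: {unadjusted:.2f}, referred: {unadjusted >= REFERRAL_THRESHOLD}")
print(f"Adjusted:   {adjusted:.2f}, referred: {adjusted >= REFERRAL_THRESHOLD}")
```

Under the Blueprint’s principles, an equity assessment would ask whether such an adjustment has a sound clinical justification and whether it systematically denies one group care that an otherwise identical patient would receive.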
This Blueprint provides a framework for future legislation and regulation of AI and for the safeguards that should be in place to protect consumers’ privacy and freedom of choice and to guard against bias. The Blueprint rightly calls for heightened consumer protections in circumstances where AI is used in healthcare.
While regulatory gaps remain at the federal level, state and local laws are beginning to fill them. For example, New York City passed Local Law 144, which requires employers and employment agencies to conduct a bias audit on any automated employment decision tools they intend to use. A California assembly member recently introduced a bill to combat algorithmic discrimination by automated tools that make consequential decisions. A number of other states have recently proposed similar legislation. Unless and until the U.S. Congress enacts comprehensive legislation, states may continue to take steps to fill gaps in AI laws and regulations.
In contrast to the United States, the European Union is well on its way to passing a comprehensive AI regulatory regime.
Emerging Leadership – European Union
The European Union (EU) is on the verge of creating a comprehensive, far-reaching regulatory regime for AI through approval of the proposed Artificial Intelligence Act (AI Act), a proposed law over two years in the making. On June 14, 2023, the EU took one more step toward passage of the sweeping legislation: the European Parliament, one of the EU’s main legislative bodies, passed a draft of the AI Act. As with the comprehensive data privacy protections the EU passed under the General Data Protection Regulation (GDPR), the AI Act is thorough and represents a proactive, unified effort by the member states of the EU to shape the industry and create corporate accountability. The AI Act is expected to pass in late 2023 and, as the first of its kind, may set the standard for AI regulation on a global scale. The earliest the law would likely apply is the second half of 2024.
The law assigns applications of AI to four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. AI systems considered a clear threat to the safety, livelihoods, and rights of people will be deemed to pose unacceptable risk and will be banned. An example of such AI is social scoring by governments (i.e., classifying individuals based on behavior, socioeconomic status, or personal characteristics). High-risk AI includes critical infrastructure that could put the life and health of citizens at risk, educational or vocational training that may determine access to education, and safety components of products, such as AI applications in robot-assisted surgery. High-risk AI systems will be subject to strict obligations before they can go to market.
Limited-risk AI refers to systems with specific transparency obligations that would allow users to make informed decisions. For such technology, users must be informed that they are interacting with a machine to be able to make an informed decision on whether or not to use the system. Lastly, minimal- or no-risk AI allows the free and largely unregulated use of AI. Examples of minimal- or no-risk AI include AI-enabled video games or spam filters.
The four risk tiers are each subject to different constraints and requirements. Developers can generally satisfy the requirements by complying with the technical standards that are currently being formulated by European standards-setting bodies.
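As a rough illustration of how an organization might triage its systems against these tiers, the sketch below maps a few example use cases drawn from the text to an assumed tier and its associated obligations. The tier assignments and obligation summaries are simplifications for illustration, not legal guidance on the final text of the Act.

```python
# Simplified sketch of triaging AI systems against the AI Act's four risk tiers.
# Tier assignments and obligation summaries are illustrative, not legal guidance.

from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., government social scoring)
    HIGH = "high"                  # strict pre-market obligations (e.g., surgical safety components)
    LIMITED = "limited"            # transparency obligations (e.g., systems users interact with)
    MINIMAL = "minimal"            # largely unregulated (e.g., spam filters)


EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "safety component of robot-assisted surgery": RiskTier.HIGH,
    "patient-facing chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "strict obligations before the system can go to market",
    RiskTier.LIMITED: "users must be informed they are interacting with a machine",
    RiskTier.MINIMAL: "free use, largely unregulated",
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.value} risk -> {OBLIGATIONS[tier]}")
```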
The proposed AI Act “focuses primarily on strengthening rules around data quality, transparency, human oversight, and accountability. It also aims to address ethical questions and implementation challenges in various sectors ranging from healthcare and education to finance and energy.” The scope of the AI Act is expansive and extraterritorial: it applies to providers and users of AI outside of the EU when the system output is used in the EU.
Perhaps most importantly, the AI Act has teeth. As it currently stands, it contains strikingly high fines—the greater of up to €40 million or 7% of the company’s total worldwide annual turnover for the preceding financial year. This large scope and penalty system will shape behavior outside of the EU and impact companies worldwide. Its strong penalties may curb or slow development in AI. However, given the potential for AI’s exponential growth, a slow start may be prudent, with adjustments along the way.
Patchwork Approaches – United Kingdom, China
With the 2020 withdrawal of the United Kingdom (UK) from the EU, the UK is working on its own regulatory approach to AI. On March 29, 2023, the UK government’s Department for Science, Innovation, and Technology and Office for Artificial Intelligence released a white paper detailing its plan for implementing a pro-innovation approach to AI regulation and seeking input through consultation. It states that it seeks to be a leader in this area. The approach is underpinned by five principles intended to guide how regulators approach risk:
- Safety, security, and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
The white paper notes that currently, AI technologies in the UK are regulated by a “complex patchwork of legal requirements.” The creation of an approach to AI regulation was prompted in part by a concern that the absence of cross-cutting AI regulation creates uncertainty and inconsistency, which can undermine business and consumer confidence in AI, stifling innovation.
The existing patchwork of laws in the UK that provide some coverage of AI issues includes the Equality Act 2010, which provides protections against discrimination. Medical device laws similar to those in the U.S. exist in the UK and regulate some products that include integrated AI. Consumer rights laws may offer protection to consumers who have entered into sales contracts for AI-based products and services.
The framework sets out to engage industry, the public sector, regulators, and other stakeholders. Among other things, the government will work to design and publish an AI Regulation Roadmap with plans for establishing the central functions, including monitoring and coordinating implementation of the principles. The UK approach stands in stark contrast to the EU approach: in the press release for its white paper, the government makes clear that it “will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators […] to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.” The UK’s regulatory approach to AI, while lighter than that of the EU, may ultimately be subsumed by the EU’s AI Act given that law’s potential reach.
Asia does not have a singular, unified approach to regulating AI. However, China is taking an active role in regulating specific types of AI algorithms and capabilities and is doing so on a rolling basis. It is one of the first countries in the world to do so. In contrast to the EU’s horizontal approach, which uses a single piece of legislation to regulate across industries, China’s approach, led by the Cyberspace Administration of China, is vertical, targeting common or risky use cases. Its first sets of regulations addressed recommendation algorithms and deep synthesis technology; the most recent measures target generative AI.
Finalized on July 13, 2023, as “interim” measures, and set to go into effect on August 15, 2023, China’s AI regulations were altered from the draft to soften their impact and demonstrate support for innovation by, in part:
- only requiring those who are developing public-facing products to submit security assessments (i.e., companies working on enterprise- or internal-facing products would not face the same hurdles);
- removing language that required a “three-month waiting period for ‘improving model training and other methods to prevent recurrence’ of content that violates the guidelines”;
- removing draft fines of up to 100,000 yuan (approximately US$14,000); and
- providing exemptions for companies in China that want to provide generative AI products to markets outside of China, while ensuring that foreign companies wishing to provide generative AI products in China are subject to the regulations.
Regulations that remained intact from the draft version released in April include:
- in processes such as algorithm design, selecting training data, and model generation and model optimization, measures are required to be in place to prevent discrimination on the basis of race, ethnicity, religion, and nationality;
- content generated through the use of generative AI is required to be true and accurate, and measures must be adopted to prevent the generation of false information;
- consent is required for use of personal data for generation of AI product pre-training and optimization training; and
- developers must register their algorithms, allowing regulators to review the algorithms and information such as the training data used and security risks.
While China has mandated that generative AI products must adhere to “core socialist values,” the final regulations also clearly reflect the government’s goal to help Chinese companies gain an advantage in the global technological AI race.
Global Perspective
AI regulation across the globe is in its infancy, as the above overview shows. AI technology is also in its infancy but, by its very nature, is positioned to grow exponentially at a rate likely to quickly outpace the development of laws and regulations. The U.S., China, Australia, and the EU, among other governmental bodies, have positioned themselves to lead this space, both technologically and in the development of a regulatory framework. The development and evolution of these frameworks will have significant consequences both for technological innovation and for consumer populations across the world.
The European Union and China have moved quickly to develop contemporary laws and regulations in this space. The EU’s proposed regulatory framework is likely to have the strongest impact regionally and globally, given the comprehensiveness, scope, and scale of its proposed law. Countries in less developed regions appear to be taking a wait-and-see approach, allowing developed countries to move first so that the effectiveness and implications of their regulatory actions can be assessed. Whether or not AI is developed in a particular country, all countries should consider implementing regulations that will protect their citizens against the harms of AI used and sold within their borders. Louder, more frequent calls for global regulation have begun, and global leaders are taking notice.
The Intersection of AI and Health Equity
Of the various AI frameworks, principles, and proposed regulations examined, which mechanisms are most likely to bolster political will, ensure accountability, and create space for marginalized voices in the healthcare arena? What are the best ways to engage people who are rightfully concerned about bias, transparency, and privacy? What actual protections can be relied upon in the face of permissive “blueprint” documents that have no enforcement mechanisms? Undoubtedly, a plethora of new regulations, guidelines, principles, and frameworks will emerge as AI evolves. The extent to which they advance health equity will depend upon the extent to which they regulate AI and address known inequities in healthcare. This section offers suggestions, through the gateway of data, for addressing the concerns identified in this article.
Representative Big Data – The Gateway to Equitable AI
While there are many pieces of the AI puzzle that will require insight, innovation, regulation, and continuous improvement, perhaps none is more important at the outset than data. “Big data” is a term that describes “large amounts of data that is unmanageable using traditional software or internet-based platforms…, which surpasses the traditionally used amount of storage, processing, and analytical power.” The term refers to data of high volume, generated with great velocity, and containing many varieties, attributes that all apply to healthcare information today, especially in HICs. To produce more accurate, unbiased, and representative information, AI/ML tools must be trained with high-quality, representative data collected from across all demographics; the quality of the output is limited by the quality of the input. How can stakeholders support data interoperability (the ability of different information systems, devices, and applications to access, exchange, integrate, and cooperatively use data in a coordinated manner) not only within a country but across countries, while at the same time supporting new methods of data collection and engagement that allow data to be gathered across all populations? Particularly in LMICs, there is a need not only for data, but also for the technological infrastructure to support its collection, storage, management, and safety.
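One concrete way to operationalize the representative-data requirement is a pre-training representativeness check: before a model is trained, the demographic mix of the dataset is compared against population benchmarks, and under-represented groups are flagged for additional collection. The sketch below illustrates the idea; the group labels, counts, benchmark shares, and tolerance are placeholders, and real projects would use context-appropriate categories and statistical tests.

```python
# Minimal sketch of a pre-training representativeness check: compare a training
# dataset's demographic mix against population benchmarks and flag groups that
# fall short. Group labels, counts, shares, and tolerance are placeholders.

def representativeness_gaps(dataset_counts: dict[str, int],
                            population_share: dict[str, float],
                            tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose dataset share falls below the population benchmark
    by more than `tolerance` (expressed as an absolute proportion)."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, expected in population_share.items():
        observed = dataset_counts.get(group, 0) / total
        shortfall = expected - observed
        if shortfall > tolerance:
            gaps[group] = round(shortfall, 3)
    return gaps


if __name__ == "__main__":
    dataset_counts = {"group_a": 7200, "group_b": 1800, "group_c": 600, "group_d": 400}
    population_share = {"group_a": 0.60, "group_b": 0.18, "group_c": 0.13, "group_d": 0.09}
    print(representativeness_gaps(dataset_counts, population_share))
    # {'group_c': 0.07} -> group_c is under-represented; prioritize further data collection
```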
One novel example of a way to collect and share healthcare data is the NIH’s All of Us program, which is designed to amass diverse healthcare data for use in research, in collaboration with public and private partners. Core values include:
- “Participation is open to all. Enrollment is open to all eligible adults who live in the United States. People of every race, ethnicity, sex, gender, and sexual orientation are welcome. No health insurance is required. You can be healthy or have health issues. You can sign up directly through JoinAllofUs.org or through participating healthcare provider organizations. In the future, children will be able to join.
- Participants reflect the rich diversity of the United States. To develop individualized plans for disease prevention and treatment, researchers need more data about the differences that make each of us unique. Having a diverse group of participants can lead to important breakthroughs. These discoveries may help make healthcare better for everyone.
- Participants are partners. Participants shape the program with their input and contribute to a project that may improve the health of future generations. They may also learn about their own health.
- Transparency earns trust. We inform participants about how their data are used, accessed, and shared. Participants can choose how much information to share.
- Participants have access to their information. All of Us lets participants see their own information and records.
- Data are broadly accessible for research purposes. All of Us makes information about participants as a group available in a public database. Everyone can explore the database or use it to make discoveries. Data from individual participants are also available, but only for researchers who apply and are approved. Any personal information that identifies a participant, such as name or address, is removed from data that researchers can access.
- Security and privacy are of highest importance. Data are stored in a secure, cloud-based database. All systems meet the requirements of the Federal Information Security Management Act. Ongoing security tests help protect participant data. Learn more about how the All of Us Research Program protects data and privacy.
- The program will be a catalyst for positive change in research. Working together, All of Us researchers, partners, and participants can build a better future for health research and care.”
All of Us is designed to collect data that can help address the SDOH by integrating biological data with environmental and lifestyle data, providing researchers a more meaningful and appropriate dataset upon which to build precision medicine solutions, not only for cancer, for example, but for many other diseases as well. Further, it is hoped that the program can support research insights into healthier living generally, without reference to disease or treatment. All of Us is nearly halfway to its goal of one million participants, with over 409,000 participants enrolled as of February 2023.
How can individuals be incentivized to join this or similar programs in order to ensure the data is representative, and what benefits will participants see from their participation? How can we support program participation by those who have fewer resources? How can resources be marshalled to support the collective development of these types of initiatives globally?
Strengthening Political Will
Political will, “the process of generating resources to carry out policies and programs… based on public understanding and support,” will be key in driving the development of, and access to, the data that will drive AI innovations in healthcare. In this sense, forming a comprehensive, insightful, and impactful approach to AI is best understood, at the outset, as an educational exercise that must create understanding and empower all stakeholders in the system to play their roles in a way that supports health equity. Without a proper foundational understanding of what is to be regulated, the potential risks and rewards, and an appreciation for the many ethical dilemmas that will arise, there will be little political support to shepherd stakeholders through the process of building an appropriate regulatory framework. Building political will starts with the education of all: policymakers, legislators, lobbyists, and governments; researchers and students; all communities, but especially marginalized communities; influencers and champions for accountability; regulators; service providers; healthcare systems; and other support organizations that stand to benefit from the use of AI in healthcare. What is clear is that political will is intentionally developed over time and is not the province of any particular stakeholder; it is a journey that all must take together if there are to be beneficial results.
How can the general population be educated about AI and its effect on healthcare? What can major stakeholders do to expand opportunities for the programmers, health monitors, researchers, data scientists, and algorithm developers who will be needed in droves to ignite the political will for equitable AI in healthcare the world over? Broad support for programs such as All of Us, which reach populations across all demographics and regions, will hopefully support new research breakthroughs, strengthening political will by demonstrating the benefits of a more inclusive and data-driven approach to developing healthcare research data and creating more equitable health outcomes.
Accountability and Transparency
Accountability must be examined anew to create a comprehensive approach to reviewing, assessing, and implementing change management with respect to AI use in the healthcare space. Regulation cannot, as is often the case, lag far behind the pace of technological advancement, leaving the public, especially those who are traditionally marginalized, behind and at risk. Because transparency is the backbone of accountability, developing best practices and standards around transparency will be crucial. Although there is sure to be variation from jurisdiction to jurisdiction, transparency measures and metrics applied over time, especially those that can be standardized, will drive a more cohesive understanding of the technology itself, which will in turn support accountability and political will among stakeholders.
Elevation of Marginalized Voices
At the most basic level, investment in AI is truly investment in people. All people, and especially historically marginalized groups, need to contribute to and benefit from the use of AI in healthcare. To level the playing field, a portion of resources in LMICs, perhaps drawn from investment by HICs and non-profits, should be directed toward education systems, training youth in programming and engineering so that these countries can be active participants in AI’s development and uses, and thus share in its benefits. This includes support for the education of internal and external data monitors, scientists, and researchers, as well as government investment in the education of developers, programmers, and coders in LMICs. Developers across countries, and particularly in HICs, should be educated on responsible and ethical design of AI and on the implications of bias and unrepresentative data.
Who Owns Data?
Considering two very different paradigms for society’s use of data could offer ideas on how to regulate the fast-growing world of AI/ML in healthcare. Pursuing AI in healthcare as a common good, from inception to use and beyond, could provide wide access to developers and others with the least amount of government intervention or regulation. Treating data as a public good that can be utilized with broad public consent may be a reasonable approach to data protection, depending on how governments define the “public good,” whether that definition changes over time, how the public is educated, and how well data anonymization can defuse privacy concerns. Conversely, healthcare data could be viewed as an individual asset that belongs to the person from whom the data was derived. Under such a paradigm, individuals would own, control, and potentially monetize their own data, readjusting incentive structures and dynamics. While individual ownership and control could more equitably distribute economic power and control as it relates to personal data, that model would need to be premised upon individual knowledge, access, and accountability to support truly equitable outcomes.
Certainly, there are pitfalls with either approach: a common public good depends on trust and corporate responsibility, as individuals do not play a direct role in how their data is shared under such a model; meanwhile, an individualized approach is far removed from our current system, and the political feasibility of implementing it may be low as a result. Both models have positive attributes from which future regulations can draw, but any decisions should be contextualized within our existing norms and systems in order to be sound. No matter the scheme, privacy-by-design systems should be employed in all healthcare applications involving patient data to provide the greatest level of trust at the outset.
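Whichever ownership model prevails, a privacy-by-design default can be stated simply: direct identifiers are stripped, and only an explicit allow-list of fields ever leaves the collection layer for analysis. The sketch below illustrates that default; the field names and allow-list are hypothetical, and real deployments would layer on de-identification standards, access controls, and audit logging.

```python
# Minimal sketch of a privacy-by-design default for patient records: strip direct
# identifiers and release only allow-listed fields for analysis. Field names and
# the allow-list are hypothetical.

DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "national_id"}
ANALYSIS_ALLOW_LIST = {"age_band", "sex", "diagnosis_code", "lab_results", "region"}


def deidentify(record: dict) -> dict:
    """Drop direct identifiers and keep only allow-listed fields."""
    return {
        field: value
        for field, value in record.items()
        if field in ANALYSIS_ALLOW_LIST and field not in DIRECT_IDENTIFIERS
    }


if __name__ == "__main__":
    raw = {
        "name": "Jane Doe",
        "email": "jane@example.com",
        "age_band": "60-69",
        "diagnosis_code": "E11.9",
        "lab_results": {"hba1c": 7.2},
        "region": "Midwest",
    }
    print(deidentify(raw))
    # {'age_band': '60-69', 'diagnosis_code': 'E11.9', 'lab_results': {'hba1c': 7.2}, 'region': 'Midwest'}
```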
Conclusion
Ready or not, AI has taken off exponentially and may ultimately surpass human intelligence. Global AI regulation, which is in its infancy, must develop at an unprecedented pace, with global collaboration and alignment, to appropriately grapple with the numerous ethical, legal, business, and policy implications of AI’s promise and potential peril. To support the advancement of health equity to its fullest potential, all stakeholders must work together; deidentified health data must be readily available for analysis by the public and private sectors; such data must be representative of all populations, with particular attention and effort devoted to gathering data on marginalized populations; and such data must be transparently managed and protected, with the fruits of its analysis distributed and accessible to all. AI necessitates collective collaboration and, perhaps, a reorientation of healthcare and education as common goods, if we are to fulfill our collective highest potential and avoid widening existing economic and health disparities across the globe.