Feature

Machines of Ordinary Skill in the Art: How Inventive Machines Will Change Obviousness

By Ryan Abbott

©2019. Published in Landslide, Vol. 11, No. 5, May/June 2019, by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association or the copyright holder.

Machines are widely facilitating innovation and have been autonomously generating patentable inventions for decades.1 “Autonomously” here refers to the machine, rather than to a person, meeting traditional inventorship criteria. In other words, if the “inventive machine” were a natural person, it would qualify as a patent inventor. In fact, the U.S. Patent and Trademark Office (USPTO) may have granted patents for machine inventions as early as 1998. In earlier works, I examined instances of autonomous machine invention in detail and argued that such machines ought to be legally recognized as patent inventors to incentivize innovation and promote fairness. The owners of these machines would be the owners of their inventions. Terms such as “computers” and “machines” are used interchangeably here to refer to algorithms or software rather than to physical devices or hardware.

Inventive Machine Standard

What happens when inventive machines become a standard part of research and development? The impact will be tremendous, not just on innovation, but also on patent law. Right now, patentability is determined based on what a hypothetical, noninventive, skilled person would find obvious. The skilled person represents the average worker in the scientific field of an invention. Once the average worker uses inventive machines, or inventive machines replace the average worker, then inventive activity will be normal instead of exceptional.

If the skilled person standard fails to evolve accordingly, this will result in too lenient a standard for patentability. Patents have significant anticompetitive costs, and allowing the average worker to routinely patent his or her outputs would cause social harm. As the U.S. Supreme Court has articulated, “[g]ranting patent protection to advances that would occur in the ordinary course without real innovation retards progress and may . . . deprive prior inventions of their value or utility.”2

The skilled person standard must keep pace with real-world conditions. In fact, the standard needs updating even before inventive machines are commonplace. Already, computers are widely facilitating research and assisting with invention. For instance, computers may perform literature searches, data analysis, and pattern recognition. This makes current workers more knowledgeable and creative than they would be without the use of such technologies. The Federal Circuit has provided a list of nonexhaustive factors to consider in determining the level of ordinary skill: (1) “type[s] of problems encountered in the art,” (2) “prior art solutions to those problems,” (3) “rapidity with which innovations are made,” (4) “sophistication of the technology,” and (5) “educational level of active workers in the field.”3 This test should be modified to include a sixth factor: (6) “technologies used by active workers.”

This change will more explicitly take into account the fact that machines are already augmenting the capabilities of workers, in essence making more inventions obvious and expanding the scope of relevant prior art. Once inventive machines become the standard means of research in a field, the test would also encompass the routine use of inventive machines by skilled persons. Taken a step further, the skilled person should then be an inventive machine. Specifically, the skilled person should be an inventive machine when the standard approach to research in a field, or with respect to a particular problem, is to use an inventive machine (the “inventive machine standard”).

To obtain the necessary information to implement this test, the USPTO should establish a new requirement for applicants to disclose when a machine contributes to the conception of an invention, which is the standard for qualifying as an inventor. Applicants are already required to disclose all human inventors, and failure to do so can render a patent invalid or unenforceable. Similarly, applicants should need to disclose whether a machine has done the work of a human inventor. This information could be aggregated to determine whether most invention in a field is performed by people or machines. This information would also be useful for determining appropriate inventorship, and more broadly for formulating innovation policies.

Whether the inventive machine standard is that of a skilled person using an inventive machine or just an inventive machine, the result will be the same: the average worker will be capable of inventive activity. Conceptualizing the skilled person as using an inventive machine might be administratively simpler, but replacing the skilled person with the inventive machine would be preferable because it emphasizes that the machine is engaging in inventive activity, rather than the human worker.

Yet simply substituting an inventive machine for a skilled person might exacerbate existing problems with the nonobviousness inquiry. With the current skilled person standard, decision makers, in hindsight, need to reason about what another person would have found obvious. This results in inconsistent and unpredictable nonobviousness determinations. In practice, the skilled person standard bears unfortunate similarities to Justice Stewart’s famously unworkable definition of obscene material: “I know it when I see it.”4 This may be even more problematic in the case of inventive machines, as it is likely to be difficult for human decision makers to theoretically reason about what a machine would find obvious.

An existing vein of critical scholarship has already advocated for nonobviousness inquiries to focus more on economic factors or objective “secondary” criteria, such as long-felt but unsolved needs, the failure of others, and real-world evidence of how an invention was received in the marketplace. Inventive machines may provide the impetus for such a shift.

Nonobviousness inquiries utilizing the inventive machine standard might also focus on reproducibility, specifically whether standard machines could reproduce the subject matter of a patent application with sufficient ease. This could be a more objective and determinate test that would allow the USPTO to apply a single standard consistently, and it would result in fewer judicially invalidated patents. A nonobviousness inquiry focused on either secondary factors or reproducibility may avoid some of the difficulties inherent in applying a “cognitive” inventive machine standard.

Regardless of how the test is applied, the inventive machine standard will dynamically raise the current benchmark for patentability. Inventive machines will be significantly more intelligent than skilled persons and also capable of considering more prior art. An inventive machine standard would not prohibit patents, but it would make obtaining them substantially more difficult: a person or computer might need to have an unusual insight that other inventive machines could not easily recreate; developers might need to create increasingly intelligent computers that could outperform standard machines; or, most likely, invention may come to depend on specialized, nonpublic sources of data. The nonobviousness bar will continue to rise as machines inevitably become increasingly sophisticated. Taken to its logical extreme, and given that there may be no limit to how intelligent computers will become, it may be that every invention will one day be obvious to commonly used computers. That would mean no more patents should be issued without some radical change to current patentability criteria.

Machines Will Become Increasingly Inventive

Machine intelligence or artificial intelligence (AI), which is to say an algorithm able to perform tasks normally requiring human intelligence, is becoming increasingly sophisticated. In 2017, DeepMind’s Go-playing program AlphaGo beat the game’s world champion. That feat was widely lauded in the AI community because of the sheer complexity of Go—there are more board configurations in the game than there are atoms in the universe. Go was the last traditional board game at which people had been able to outcompete machines. Later that year, an improved AI by DeepMind, AlphaGo Zero, defeated AlphaGo 100 games to zero. AlphaGo Zero did this after training for only three days by playing against itself. Unlike its predecessor, it did not train from prior human games.

AI like DeepMind’s may soon outperform people at more practical tasks relevant to R&D. Indeed, in December 2018, DeepMind’s AlphaFold AI took top honors in the 13th Critical Assessment of Structure Prediction (CASP), a competition for predicting protein structure. Predicting protein structure can be an important component of drug discovery, for example. Similarly, IBM’s flagship AI system Watson is being used to conduct research in drug discovery.

Ultimately, the developers of DeepMind hope to create artificial general intelligence (AGI). Existing “narrow” or specific AI systems focus on discrete problems or work in specific domains. AGI could even be set to the task of self-improvement, resulting in a continuously improving system that surpasses human intelligence. Such an outcome has been referred to as the intelligence explosion or the technological singularity. AI could then innovate in all areas of technology, resulting in progress at an incomprehensible rate. As the mathematician Irving John Good wrote in 1965, “the first ultraintelligent machine is the last invention that man need ever make.”5

Inventive Is the New Skilled

In the future, having inventive machines replace the skilled person may better correspond with real-world conditions. Right now, there are inherent limits to the number and capabilities of human workers. The cost to train and recruit new researchers is significant, and there are a limited number of people with the ability to perform this work. By contrast, inventive machines are software programs which may be non-rivalrous. Once Watson outperforms the average industry researcher, IBM may be able to simply copy Watson and have it replace most of an existing workforce. Copies of Watson could replace individual workers, or a single Watson could do the work of a large team of researchers.

Thus, one way in which inventive machines will change the skilled paradigm is that they will make an average worker inventive compared to a static skilled person standard. Yet as the use of inventive machines becomes standard, their outputs should no longer be inventive because their widespread use should instead raise the bar for obviousness. To generate patentable output in a world of inventive machines, it may be necessary to use an advanced machine that can outperform standard machines, or a person or machine will need to have an unusual insight that standard machines cannot easily recreate. Inventiveness also may depend on the data supplied to a machine, such that only certain data would result in inventive output.

Skilled People Use Machines

In some instances, using an inventive machine may require significant skill, for example, if the machine is only able to generate a certain output by virtue of being supplied with certain data. Determining which data to provide a machine, and obtaining that data, may be a technical challenge. Also, it may be the case that significant skill is required to formulate the precise problem to put to a machine. In such instances, a person might have a claim to inventorship independent of the machine, or a claim to joint inventorship. This is analogous to collaborative human invention where one person directs another to solve a problem. Depending on the details of their interaction, and who “conceived” of the invention, one person or the other may qualify as an inventor, or they may qualify as joint inventors. Generally, however, directing another party to solve a problem does not qualify for inventorship. Particularly after the development of AGI, there may not be a person instructing a computer to solve a specific problem. AGI should be able to solve not only known problems but also unknown problems.

The changing use of machines also suggests a change to the scope of prior art. Currently, for purposes of obviousness, prior art must be in the field of an applicant’s endeavor, or reasonably pertinent to the problem with which the applicant was concerned. This analogous art test was implemented because it is unrealistic to expect inventors to be familiar with anything more than the prior art in their field, and the prior art relevant to the problem they are trying to solve. However, a machine is capable of accessing a virtually unlimited amount of prior art. Advances in medicine, physics, or even culinary science may be relevant to solving a problem in electrical engineering. Machine augmentation suggests that the analogous art test should be modified or abolished once inventive machines are common, and that there should be no difference in prior art for purposes of novelty and obviousness. The scope of analogous prior art has consistently expanded in patent law jurisprudence, and this would complete that expansion.

An Economic vs. Cognitive Standard

The skilled person standard received its share of criticism even before the arrival of inventive machines. The inquiry focuses on the degree of cognitive difficulty in conceiving an invention but fails to explain what it actually means for the differences between an invention and the prior art to be obvious to an average worker. The approach lacks both a normative foundation and a clear application.

In Graham v. John Deere Co., the Supreme Court’s seminal opinion on nonobviousness, the Court attempted to supplement the test with more “objective” measures by looking to real-world evidence about how an invention was received in the marketplace.6 Rather than technological elements, these “secondary” considerations focus on “economic and motivational” features, such as commercial success, unexpected results, long-felt but unsolved needs, and the failure of others. Since Graham, courts have also considered, among other things, patent licensing, professional approval, initial skepticism, near-simultaneous invention, and copying. Today, while decision makers are required to consider secondary evidence when available, the importance of these factors varies significantly. Graham endorsed the use of secondary considerations, but their precise use and relative importance have never been made clear.

An existing vein of critical scholarship has advocated for adopting a more economic than cognitive nonobviousness inquiry, for example, through greater reliance on secondary considerations. This would reduce the need for decision makers to try and make sense of complex technologies, and it could reduce hindsight bias.

Theoretically, in Graham, the Court articulated an inducement standard, which dictates that patents should only be granted to “those inventions which would not be disclosed or devised but for the inducement of a patent.”7 But in practice, the inducement standard has been largely ignored due to concerns over its application. For instance, few, if any, inventions would never be disclosed or devised given an unlimited time frame. Patent incentives may not so much increase invention as accelerate it. This suggests that an inducement standard would at least need to be modified to include some threshold for the quantum of acceleration needed for patentability. Too high a threshold would fail to provide adequate innovation incentives, but too low a threshold would be similarly problematic. Just as inventions will eventually be disclosed without patents given enough time, patents on all inventions could marginally speed the disclosure of just about everything, but a trivial acceleration would not justify the costs of patents. An inducement standard would thus require a somewhat arbitrary threshold in relation to how much patents should accelerate the disclosure of information, as well as a workable test to measure acceleration. To be sure, an economic test based on the inducement standard would have challenges, but it might be an improvement over the current cognitive standard.

The widespread use of inventive machines may provide the impetus for an economic focus. After inventive machines become the standard way that R&D is conducted in a field, courts could increase reliance on secondary factors. For instance, patentability might depend on how costly it was to develop an invention, and the ex ante probability of success. There is no reason an inventive machine cannot be thought of, functionally, as an economically motivated rational actor. The test would raise the bar to patentability in fields where the cost of invention decreases over time due to inventive machines.

A Focus on Reproducibility

The inventive machine standard could also focus on whether one or more machines selected to represent the standard could independently reproduce an invention. A decision maker would need to: (1) determine the extent to which inventive technologies are used in the field, (2) characterize the inventive machine(s) that best represents the average worker if inventive machines are the standard, and (3) determine whether the machine(s) would find an invention obvious. For the first step, determining the extent to which inventive technologies are used in a field, evidence from the disclosures to the USPTO advocated for earlier could be used. That may be the best source of information for patent examiners, but evidence may also be available in a litigation context.

The decision maker would then need to characterize the inventive machine(s). It could be a hypothetical machine based on the general capabilities of inventive machines, or it could be based on the capabilities of specific computers. Using the standard of a hypothetical machine would be more like the skilled person test. If the test is based on specific computers, there are various ways it could be built around the market-leading machine(s) without necessarily excluding all the outputs of those machines from being inventive, such as by limiting the data or resources available to the standard machine.8

After characterizing the inventive machine(s), a decision maker would need to determine whether the inventive machine would find an invention obvious. This could be accomplished with abstract knowledge of what the machine would find obvious, perhaps through expert testimony, or potentially by querying machines.

Of course, reproducibility comes with its own baggage. Decision makers have difficulty imagining what another person would find obvious, and it would probably be even more difficult to imagine in the abstract what a machine could reproduce. Computers may excel at tasks people find difficult (like multiplying a thousand different numbers together), but even supercomputers struggle with visual intuition, which is mastered by most toddlers. More evidence might need to be supplied in patent prosecution and during litigation, perhaps as expert opinion or in the format of analyses performed by inventive machines, to demonstrate whether particular output was reproducible. This might also result in a greater administrative burden.

In some instances, reproducibility may be dependent on access to data. A large health insurer might be able to use Watson to find new uses for existing drugs by giving Watson access to proprietary information on its millions of members. Or, the insurer might license its data to drug discovery companies using Watson for this purpose. Without that information, another inventive computer might not be able to recreate Watson’s analysis.

This too is analogous to the way data is used now in patent applications: obviousness is viewed in light of the prior art, which does not include nonpublic data relied on in a patent application. Yet as machines become highly advanced, in part due to big data, paradoxically the importance of proprietary data for generating output may decrease. More advanced machines may be able to do more with less.

Other Alternatives

Courts may maintain the current skilled person standard and decline to consider the use of machines in obviousness determinations. However, this means that as research is augmented and then automated by machines, the average worker will routinely generate patentable output. The dangers of such a standard for patentability are well-recognized. A low obviousness requirement can “stifle, rather than promote, the progress of useful arts.”9

Instead of updating the skilled person standard, courts might determine that inventive machines are incapable of inventive activity, much as the U.S. Copyright Office has determined that nonhuman authors cannot generate copyrightable output. In this case, otherwise patentable inventions might not be eligible for patent protection, unless provisions were made for the inventor to be the first person to recognize the machine output as patentable. However, this would not be a desirable outcome. Providing intellectual property protection for computer-generated inventions would incentivize the development of inventive machines, which would ultimately result in additional invention. This is most consistent with the constitutional rationale for patent protection “[t]o promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”10

Conclusion

In the past, patent law has reacted slowly to technological change. For instance, it was not until 2013 that the Supreme Court decided human genes should be unpatentable. By then, the USPTO had been granting patents on human genes for decades, and more than 50,000 gene-related patents had been issued.

Eminent technologists now predict that AI is going to revolutionize the way innovation occurs in the near to medium term. Much of what we know about intellectual property law, while it might not be wrong, has not been adapted to where we are headed. The principles that guide patent law need to be, if not rethought, then at least retooled in respect of inventive machines. We should be asking what our goals are for these new technologies, what we want our world to look like, and how the law can help make it so.

Endnotes

1. See, e.g., Ryan Abbott, I Think, Therefore I Invent: Creative Computers and the Future of Patent Law, 57 B.C. L. Rev. 1079, 1083–91 (2016).

2. KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 402 (2007).

3. In re GPAC Inc., 57 F.3d 1573, 1579 (Fed. Cir. 1995).

4. Jacobellis v. Ohio, 378 U.S. 184, 197 (1964) (Stewart, J., concurring).

5. Irving John Good, Speculations Concerning the First Ultraintelligent Machine, 6 Advances in Computers 31, 33 (1965).

6. 383 U.S. 1 (1966).

7. Id. at 11.

8. Ryan Abbott, Everything Is Obvious, 66 UCLA L. Rev. 2, 37–44 (2019).

9. KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 427 (2007).

10. U.S. Const. art. I, § 8, cl. 8.


Ryan Abbott is a professor of law and health sciences at the University of Surrey School of Law in the United Kingdom, and an adjunct assistant professor at the David Geffen School of Medicine at the University of California, Los Angeles. 

 

This article is adapted from the author’s article “Everything Is Obvious,” 66 UCLA L. Rev. 2 (2019).