Feature

Drafting Patent Applications Covering Artificial Intelligence Systems

By Christopher J. White and Hamid R. Piroozi

Published in Landslide Vol. 11 No. 3, ©2019 by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.

Artificial intelligence (AI) is this decade’s tech buzzword, and where there’s technology, there are patents. Over 11,000 U.S. patent applications in AI-related areas have published in the last three years alone.1 Your tech-company clients are most likely already developing AI solutions to particular problems and want patents that protect those solutions—and if they are not building AI now, they soon will be! Obtaining a patent covering an AI system presents some challenges common across technology fields, and some unique even within the realm of software patents. In this article, we will guide you through the patent application process for newly invented AI systems. We will leave you better prepared to secure the AI patent coverage your clients expect.

AI is a prominent and expanding research and product area. Tesla, Zoox, and many other groups are working feverishly on self-driving cars. Apple’s Siri, Microsoft’s Cortana, and other digital assistants gain new abilities—seemingly every day—to more effectively understand our commands and carry them out. Netflix and YouTube pour hours and dollars every year into improved “you may also like” recommendation algorithms. These are all examples of modern, big-data AI. Although these examples are in very different domains, they are variations on a common theme: training an AI in the lab, then using the trained AI on live data.

Big-Data AI Extracts Patterns from Large Data Sets

Big-data AI uses algorithms to find subtle relationships in a large set of “training” data. The training process locates those relationships and encodes them into a “model,” such as a neural network. The model can then be used to find relationships between inputs similar to those in the training data. For example, the U.S. Post Office uses a trained handwriting-recognition model to read ZIP codes and house numbers on envelopes.2 The Post Office’s training data dropped into its lap—every letter sent was available for use in developing the model. However, it is not always that easy—a German startup called Viorama had to render artificial images of humans in order to train its model to recognize humans in real images.3
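The train-then-use pattern described above can be seen in miniature below. This is a toy sketch in the spirit of the ZIP-code example, not anything from an actual postal system: the "model" is just a per-class average (a nearest-centroid classifier), and the two-value "images" and labels are invented for illustration.

```python
# Toy sketch of the big-data AI pattern: learn a model from labeled
# training data, then use the trained model on new inputs. The "model"
# here is a per-class average (centroid); real systems use neural nets.

def train(samples):
    """samples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for vec, label in samples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
    return {lab: [s / counts[lab] for s in acc]
            for lab, acc in sums.items()}

def classify(model, vec):
    """Return the label whose centroid is closest to vec (squared distance)."""
    def dist(lab):
        return sum((a - b) ** 2 for a, b in zip(model[lab], vec))
    return min(model, key=dist)

# Training phase ("in the lab"): crude two-value "images" of two digits.
model = train([([0.9, 0.1], "7"), ([0.8, 0.2], "7"),
               ([0.1, 0.9], "1"), ([0.2, 0.8], "1")])

# Use phase ("live data"): an input the model never saw.
print(classify(model, [0.85, 0.15]))  # prints "7"
```

The same structure scales up: replace the centroids with a neural network and the toy vectors with pixel data, and you have the skeleton of a handwriting-recognition system.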

The trained model itself may reside anywhere it can receive inputs and provide outputs. Recommendation models generally run on servers, and interact with the world through users’ web browsers or the web servers those browsers are communicating with. Therefore, the trained model may never be exposed to the public. By contrast, a self-driving car needs to be operable even if it is away from a network connection. Therefore, at least some of the model needs to be stored in the car and available for use.

Get the Most Out of Your Meetings with Inventors

As you can see, a big-data AI system has a lot of different parts that come together to provide a smooth user experience. When you meet with inventors, make sure to find out what the invention actually is. Inventors can improve an AI system’s performance by improving how the training data is collected, how the data is mapped into “features” (the actual inputs of the model), how the model is trained, how data or features are provided to the trained model, or how the model’s outputs are post-processed or interpreted.4 Inventors can also improve the AI’s performance by adjusting the internal structure of the model to fit more effectively with the problem or problem domain. Therefore, inventions could include any of those—or any combination! For example, we have written a number of applications covering both a new model structure and a new way of training that model. Claiming such inventions can be challenging, as we will discuss below. But the prerequisite for even attempting to draft AI claims is understanding clearly which pieces of the AI ecosystem an invention improves.

So how much of that ecosystem do you need to understand to write an AI patent application? The good news is that you do not have to be an AI expert. However, you do need enough background in AI to know what you do not (yet) know. We recommend you read up on various types of models, the distinction between supervised and unsupervised training, the basics of feature engineering, gradient-descent training, genetic-algorithm training, and multibatch techniques for organizing training data. Now, don’t panic—if that last sentence left you feeling overwhelmed, you are not alone. The AI research community is trying very hard to make information about AI accessible to the general public, since AI systems increasingly affect all of us. For example, the University of Helsinki offers a free, online “Elements of AI” course.5 Take advantage of online courses, video tutorials, and reviews to expand the bounds of your knowledge, and to learn what those bounds are.
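Of the techniques listed above, gradient-descent training is perhaps the easiest to see in a few lines. The sketch below is an illustrative assumption, not anything from the article or a real training system: it minimizes a one-dimensional loss by repeatedly stepping against the slope, which is the same idea (vastly scaled up) behind training a neural network.

```python
# Gradient descent in miniature: repeatedly nudge a parameter against
# the slope of a loss function until the loss stops shrinking.

def gradient_descent(grad, w0=0.0, lr=0.1, steps=100):
    """Minimize a 1-D loss, given its derivative `grad`."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)   # step downhill, scaled by the learning rate
    return w

# Example loss (w - 3)^2 has derivative 2*(w - 3); its minimum is at w = 3.
w = gradient_descent(lambda w: 2 * (w - 3))
print(round(w, 3))  # 3.0
```

In real training, `w` becomes millions of network weights and the derivative is computed over batches of training data, but the update rule is the same.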

When speaking with inventors, be prepared to ask lots of questions and to feel ignorant occasionally. You are not alone! The inventors we have worked with have been experts in the field, often even creating the state of the art. Many of them suffer from a “curse of knowledge”6: they have difficulty seeing their inventions from the perspective of someone (a patent practitioner) not as skilled and well-read in AI as they are. We have had good success knowing just enough at first to ask the inventors pertinent questions, and then building our knowledge from there. Going through this process with the inventors also gives you an idea of the kinds of questions an examiner might ask, and permits you to provide the answers in the specification or claims.

Write Claims the Patent Office Will Consider “Patent Eligible”

As with any patent application, your claims must describe technology in a way the patent system will recognize as an invention. A 2004 attempt to patent the plot of a story was finally snuffed out in 2014, proving that there are limits to how far “invention” will reach.7 In the vocabulary of the U.S. Supreme Court’s controlling decision in Alice v. CLS Bank, a “patent-eligible” claim to an invention must not be “directed to an abstract idea,” or must include an “inventive concept” that goes above and beyond an “abstract idea.”8 We think of it this way: a claim to an AI invention must at least have enough meat on its bones to look like the solution to a problem, rather than like wishful thinking. Otherwise, a holding that the claim is “directed to an abstract idea” may be very difficult to overcome.

Unfortunately, Alice left the very definition of “abstract idea” fairly abstract. It is much easier to describe what is not patent eligible under Alice than what is. Some rules of thumb: A claim that performs a preexisting business practice using a computer will likely be ineligible. A claim that can be performed in the human mind will be ineligible. And—most significantly, in our experience—a claim that recites an algorithm that can be implemented on a regular PC will often be ineligible. That last carve-out, unfortunately, is a pit into which a great many AI inventions may fall.

There is some hope, though. The 2016 Federal Circuit decision in Enfish v. Microsoft established that “[s]oftware can make non-abstract improvements to computer technology just as hardware improvements can.”9 The mention of an algorithm in a claim no longer necessarily implies abstractness, as it seemingly did in the first couple of years after Alice. Therefore, when you draft claims, recite operations that cannot reasonably be done mentally. Give the claims technical features that improve something other than people’s lives. Make the claims look as technological as possible. And, to take best advantage of Enfish, include claim features that improve “computer technology,” such as the operation of the computers running the AI system.

For example, a claim to a new way of training an AI system may reduce the amount of memory required while determining the model. That savings of memory is arguably an improvement in computer technology, because it is an improvement directly within a computer system. Therefore, such a new way of training may be different from an ineligible “‘abstract idea’ for which computers are invoked merely as a tool.”10 Similarly, in systems that perform training across multiple computers in parallel, a new way of training may reduce the amount of data that has to be exchanged between those computers. This reduction can be considered an improvement in computer technology, and thus the claim may be patent eligible under Enfish.

The Federal Circuit decision in McRO v. Bandai Namco provides another path to patent eligibility: claim “a specific means or method that improves [a] technology” instead of “a result or effect that itself is the abstract idea and merely invoke[s] generic processes and machinery.”11 Every trained model, when actually used for its intended purpose, involves technical details that can satisfy this requirement. Suppose you are training a model to drive a car. The trained model will require specific inputs—perhaps GPS, a particular configuration of laser or sonar sensors, or multiple accelerometers. That specific input configuration will distinguish your trained model from other models trained to perform similar tasks. Including some details of that input configuration permits you to argue that you are indeed claiming a patent-eligible “specific means or method.”

Finally, as a backstop, give each independent claim at least one dependent claim that recites all the technical detail you can think of. In our experience, it is easier to overcome an Alice rejection of some of the claims than of all of the claims. Including one highly detailed, highly technical claim will improve your odds when arguing for patent eligibility of at least that claim.

Stay Current as the Law of Patent Eligibility Develops

Enfish and McRO are not the only decisions to rewrite many of the rules. In fact, the law of patent eligibility has been changing continually since Alice. How do you keep up with the changes? Well, when the courts significantly change patent law, the U.S. Patent and Trademark Office (USPTO) responds with memos to the examining corps. The memos are publicly available.12 They are not a substitute for the court decisions themselves, but are still useful aids for those drafting and prosecuting applications for AI inventions. Take advantage of the memos to reduce your workload (just do not forget to read the decisions sooner or later!).

USPTO memos to the corps are good for more than just efficiency. We have spoken with numerous examiners who simply do not have enough hours in their day to read the court decisions affecting patent eligibility. However, examiners are generally trained on the contents of the memos. Therefore, you are more likely to succeed with the examiner if your claims meet the requirements of the law as that law is expressed in the memos.

The memos also sometimes cover ground the decisions themselves have not. A recent patent eligibility case, Berkheimer v. HP, related to a motion for summary judgment.13 There is no such proceeding before the USPTO. However, Berkheimer raised the standard for showing that an element in a claim is generic or conventional, which is very relevant to patent prosecution under Alice. The corresponding USPTO memo14 applies Berkheimer to the situation of examiners and applicants. Therefore, arguing Berkheimer before the USPTO is much easier in view of the memo than based on the decision alone. Read the memos and make good use of them!

Patent Eligibility in Europe Follows Different Standards

The confusion of patent eligibility in the United States since Alice stands in stark contrast to the long-established European approach to patent eligibility. Under the European Patent Convention (EPC), software as such is unpatentable.15 However, Europe will permit claims to “computer-implemented inventions,” i.e., inventions that involve a software component together with something else. The “something else” has to provide a “further technical effect”—for example, it has to interact with the real world in some way.16 Almost any AI invention has such an interaction, or else why would it have been invented? For example, AIs that optimize shipping patterns to reduce fuel requirements, AIs that manage network transfers or compress data to reduce bandwidth usage, and AIs that read sensors or drive actuators can all provide a further technical effect.

For AI systems that directly control motors or sensors, the connection between the AI and the real-world advantages is straightforward. For AI systems that primarily analyze data, however, the connection may be more tenuous. Consider an AI model trained to “caption” images, i.e., to describe the contents of the image in a sentence or two. The model’s input is a human-provided image, and the model’s output is human-readable text. To find further technical effect, dig into how the caption is provided. Try to find technological advantages or real-world connections in the implementation details. Explaining the advantages that come from those details, and from other claim features and combinations of features, will give you arguments against rejections for lack of further technical effect (in Europe) or lack of patent eligibility (in the United States).

Europe has a 16-year head start17 on the United States in developing precedent to support its patent eligibility rules, and it shows. Take advantage of the wealth of information available online about further technical effect, and talk to European patent firms. Claim specific features that provide further technical effects, and you will be on the right track to European coverage. Plus, arguments for further technical effect in Europe may give you corresponding patent eligibility arguments in the United States.

Write Claims with the Potential Infringer in Mind

U.S. Infringement Law Is Complicated and Evolving

The U.S. Code18 generally divides infringement into two categories: direct infringement and indirect infringement. The former category is generally accepted as strict liability without concern for knowledge or scienter, while the latter category requires some level of knowledge or intent. Generally—though not always—infringing acts must take place in the United States to infringe a U.S. patent.

A direct infringer is a single entity that commits all the steps of the infringing act, e.g., makes, uses, offers to sell, or sells any patented invention, or imports an implementation of that invention into the United States.19 Therefore, if the claimed invention is a method claim comprising several steps, all those steps must be performed by a single entity.

An indirect infringer can be liable for inducement or contributory infringement. In the former case, whoever actively induces infringement of a patent is liable as an infringer.20 In the latter case, whoever offers to sell or sells a component of a claimed invention, or imports such a component into the United States, is liable as a contributory infringer. However, contributory infringement requires that the contributory infringer know that component to be especially made or adapted for use in an infringement, and not be a staple article of commerce suitable for substantial noninfringing use.21

In either indirect infringement category, a single entity must still commit all the steps of the claimed invention: showing indirect infringement requires showing an underlying direct infringement. The indirect infringement statute allows patentees who have been harmed to go after the real culprits of the infringement. For example, a method claim to an AI model training technique may not be infringed until a model is trained using that technique. The law of indirect infringement permits the patent holder to assert such a claim against the party that produced the training software. This is true even if that party, in an attempt to divide infringement between itself and the end users of its software, does not train models itself.

However, indirect infringement has its limitations. For example, if a component sold by an alleged contributory infringer is suitable for significantly noninfringing uses, those uses act to negate indirect infringement. To avoid simple division of responsibility in a direct infringement (thereby transforming direct to indirect infringement), the courts have long recognized that, absent an agency relationship between the actors or some equivalent, a party that does not commit all the acts of an infringement is not liable for direct infringement. This was once true even if the parties had arranged to “divide” their acts of infringing conduct for the specific purpose of avoiding infringement liability.22

Faced with this loophole, the Federal Circuit expanded the divided infringement rule from a simple agency rule to a new rule of joint enterprise under divided infringement. Under the old simple agency rule, where a first entity directs or controls a second entity’s acts, the first entity is liable for direct infringement committed by both entities together (vicarious liability). Under the new rule, where two or more entities form a joint enterprise, all can be charged with the acts of the others, rendering each liable for the steps performed by the others as if each is a single actor. Each entity is considered the agent or servant of the others, so the act of any entity within the scope of the enterprise is charged vicariously against the rest.23

Claim with These Complications in Mind

The oldest advice is still the best advice: write each claim so that it covers actions by only one party, without requiring any actions by any other party. For example, a claim reciting use of a trained model, without any requirement for a particular training process, is likely to be infringeable by only one party. The same is true for a claim to only a training process, without use of a trained model. (However, a claim to only a training process, without a practical use case in the claim language, may face an uphill struggle for patent eligibility.) Focusing on a single party avoids the entire question of indirect infringement, and simplifies your eventual infringement arguments.

With AI inventions, remember that there may be more players than you expect. For example, training data may come from numerous sources. If you ever used Google’s former directory assistance service, the sound of your voice became training data for a speech-recognition AI.24 Similarly, Google’s recent CAPTCHAs that ask you to label images are providing training data for an image-recognition AI.25 So even a claim step as simple as “receiving training data” may implicate multiple parties.

Finally, do not forget detectability. A claim does not mean much if you can never find out whether someone has infringed it. Companies often keep their training data sets to themselves, as part of the value of the company.26 To protect that data, the training process often happens inside a company, out of public view. Even deployment of a model does not necessarily make it detectable—for example, you may never see the internals of a model running on a cloud server.

Can an AI System Itself Infringe a Patent Claim?

Infringement of intellectual property by an AI machine presents a new challenge to the rubric of infringement. Several theories of infringement may be relied on to advance a suit involving acts of infringement committed by an AI machine. Suppose company X places an AI machine in the field. The AI machine then proceeds to infringe a patented invention. In such a situation, it is more likely than not that company X is unaware of the particular act of the AI machine. Thus, the traditional indirect infringement theories (e.g., inducement under 35 U.S.C. § 271(b) and contributory infringement under 35 U.S.C. § 271(c)) are likely unavailable.

However, direct infringement under the divided rubric described above may still be an available route. For example, under a principal-agent relationship, company X may be viewed as the principal and the AI machine as its agent. To determine whether a single entity directs or controls the acts of another, the Federal Circuit considers general principles of vicarious liability.27 An actor infringes vicariously by profiting from direct infringement when that actor has the right and ability to stop or limit the infringement. Thus, to prevail under such a theory, the principal (company X) must be in a position to stop the acts of the AI machine. It is possible, however, to divide responsibilities so as to avoid this requirement. Under the joint enterprise rubric, the rule requires a fact-based inquiry grounded on the following elements: (1) an agreement, express or implied, among the members of the group; (2) a common purpose to be carried out by the group; (3) a community of pecuniary interest in that purpose, among the members; and (4) an equal right to a voice in the direction of the enterprise, which gives an equal right of control. These requirements are more difficult to show than the principal-agent aspect of divided infringement. As such, the authors of this article assert that a new rule for divided infringement will be needed to address AI-based infringement.

Write a Patent Application That Describes a Technological Development

Because AI inventions have so many moving parts, expect to cover more ground in an AI application than you might in a non-AI application. Include at least enough detail to explain the problem and the solution to an examiner, ideally solely by reference to the contents of the application. Give the context of the invention, both problem and solution. Explain (even if briefly) the difficulty the AI invention will overcome, how the model will be trained, and how the trained model will be used to overcome that difficulty. This material will help support arguments for patent eligibility, and will give you context to recommend to the examiner before an interview. We suggest you provide all of this in the detailed description section, rather than the background, to avoid any of the context becoming prior art.

Similarly, describe all the pieces of the end-to-end system in which the model will be used. No, you probably do not have to describe the innards of each individual sensor that will feed data to the model. But if you want to claim the whole system, you should at least describe how to put the pieces together.

For European coverage, it is not enough merely to describe the claimed invention itself. Also describe the further technical effects and which features provide them. Unlike in the United States, in Europe you generally cannot mix and match different portions of the application. If you do not clearly associate features of your claims with the further technical effects they provide, you may not be able to overcome a rejection for lack of further technical effect.

Writing a single application that will support both U.S. and European claims is not always easy. For example, unlike European practice, U.S. practice frowns on listing specific advantages of claimed inventions.28 However, we believe you can reasonably say in your application that “using feature X can provide benefit Y.” We often include a paragraph of such statements at the end of each figure’s discussion, or mention the benefits throughout with respect to individual features. Do yourself a favor, though, and never mention the word “invention” anywhere in the application (or during prosecution). Patent law may be about inventions, but that does not mean that anyone should characterize your invention—not even you!

A patent application must also describe the manner and process of making and using the invention.29 You satisfy this requirement differently depending on the nature of the invention. However, almost any big-data AI application will involve a training process and a model. Make sure you explain how the training process works in enough detail that someone could make and use the training process, even if you are not planning to claim the training process itself. Training is arguably an inseparable part of “making” a model-driven invention, so it should not be omitted.30

Formalities for Overseas Prosecution

European and Japanese practice generally require a block diagram of the computer system implementing the AI. Therefore, include such a diagram in every AI application you file. In addition, our understanding is that European applications benefit from showing a breakdown of the software into functional, interconnected blocks. Rather than merely showing a “training” block in the diagram, show the different stages of training and how they interrelate. For example, a “feature extraction” block could feed data to a “mathematical optimization” block. The same can be done for use of the model: a “sensor-reading” block can feed the “feature extraction” block.
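The functional blocks described above can be sketched as composable stages. The block names below mirror the diagram the paragraph suggests; the stubbed sensor readings and the closed-form line fit inside the "mathematical optimization" block are placeholder assumptions chosen only to make the wiring concrete.

```python
# Sketch of a block diagram as code: each function is one labeled block,
# and the function calls at the bottom are the arrows between blocks.

def read_sensors():
    """'Sensor-reading' block: raw measurements (stubbed for illustration)."""
    return [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2)]

def extract_features(raw):
    """'Feature extraction' block: map raw readings to model inputs."""
    return [(x, y) for x, y in raw]  # identity mapping in this toy

def optimize(features):
    """'Mathematical optimization' block: closed-form least-squares
    fit of a line y = w*x + b to the extracted features."""
    n = len(features)
    sx = sum(x for x, _ in features)
    sy = sum(y for _, y in features)
    sxx = sum(x * x for x, _ in features)
    sxy = sum(x * y for x, y in features)
    w = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - w * sx) / n
    return w, b

# Wire the blocks together, as the arrows in a block diagram would.
model = optimize(extract_features(read_sensors()))
print(model)
```

Drawing (and describing) the system at this level of granularity gives each block a clear function and interface, which is exactly what examiners in Europe and Japan look for.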

Remember, too, that AI systems are often implemented using field-programmable gate arrays (FPGAs), which are specialized reconfigurable hardware. Therefore, do not merely describe the blocks in your block diagram in terms of software; also describe FPGA or hardware implementations. Even software implementations increasingly run on graphics processing units (GPUs) rather than conventional microprocessors, so include GPU embodiments as well.

Finally, to support European prosecution, we often add a section of “example clauses” to the end of the specification. These clauses are a copy of the claims, converted to text paragraphs and with full multiple dependencies added. This permits you to claim more combinations in Europe than you would otherwise be able to. For example, in a European case with claims 2 and 3 both dependent directly from claim 1, the “intermediate generalization” standard may prevent you from claiming a combination of claims 1, 2, and 3. Including example clauses expressly reciting those combinations (in this example, claim 3 dependent from “claim 1 or claim 2” in the clauses) supports claiming those combinations if necessary during prosecution. The example clauses can also include discussion supporting further technical effect arguments.

Conclusion

Big-data AI grows in capability and reach every day. After the dark days of Alice earlier this decade, the outlook for patents on AI systems is bright. So pay attention to the details during disclosure meetings. Draft claims with an understanding of the AI ecosystem. Craft specifications that shine a spotlight on an invention’s merits. Do these, and your patent applications will effectively protect your clients’ AI inventions.

Endnotes

1. Defined as Cooperative Patent Classification class G06N. Result count from PreGrant Publication Database Search Results: cpc/G06N$ and pd/20151001->20181001, http://appft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&u=%2Fnetahtml%2FPTO%2Fsearch-adv.html&r=0&p=1&f=S&l=50&Query=cpc%2FG06N%24%2Band%2Bpd%2F20151001-%3E20181001&d=PG01 (last visited Dec. 11, 2018).

2. Sargur Srihari, Artificial Intelligence at the Post Office and the Police Department (2008), https://cedar.buffalo.edu/~srihari/talks/Telcordia.pdf.

3. Tom Simonite, Some Startups Use Fake Data to Train AI, Wired (Apr. 25, 2018), https://www.wired.com/story/some-startups-use-fake-data-to-train-ai/.

4. See, e.g., Unsupervised Sentiment Neuron, OpenAI (Apr. 6, 2017), https://blog.openai.com/unsupervised-sentiment-neuron/.

5. Elements of AI, https://www.elementsofai.com/ (last visited Dec. 11, 2018).

6. Carl Wieman, The “Curse of Knowledge,” or Why Intuition about Teaching Often Fails, 16 Am. Physical Soc’y News, no. 10, Nov. 2017, https://www.aps.org/publications/apsnews/200711/backpage.cfm.

7. U.S. Patent Application No. US20050282140A1 (filed June 17, 2004) (abandoned).

8. Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 134 S. Ct. 2347, 2352–55 (2014).

9. Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1335 (Fed. Cir. 2016).

10. Id. at 1335–36.

11. McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1314 (Fed. Cir. 2016).

12. Subject Matter Eligibility, USPTO, https://www.uspto.gov/patent/laws-and-regulations/examination-policy/subject-matter-eligibility (last visited Dec. 11, 2018).

13. Berkheimer v. HP Inc., 881 F.3d 1360 (Fed. Cir. 2018).

14. Memorandum from Robert W. Bahr, Deputy Comm’r for Patent Examination, USPTO, to Patent Examining Corps, Changes in Examination Procedure Pertaining to Subject Matter Eligibility, Recent Subject Matter Eligibility Decision (Berkheimer v. HP, Inc.) (Apr. 19, 2018), https://www.uspto.gov/sites/default/files/documents/memo-berkheimer-20180419.PDF.

15. Convention on the Grant of European Patents art. 52(2)(c), Oct. 5, 1973, 1065 U.N.T.S. 199, https://www.epo.org/law-practice/legal-texts/html/epc/2016/e/ar52.html.

16. Case T-1173/97 (Comput. Program Prod./IBM), 1999 O.J. E.P.O. 609, https://www.epo.org/law-practice/case-law-appeals/recent/t971173ex1.html.

17. Id. This decision was handed down 16 years before Alice.

18. 35 U.S.C. § 271.

19. Id. § 271(a).

20. Id. § 271(b).

21. Id. § 271(c).

22. Akamai Techs., Inc. v. Limelight Networks, Inc., 692 F.3d 1301 (Fed. Cir. 2012).

23. Akamai Techs., Inc. v. Limelight Networks, Inc., 797 F.3d 1020 (Fed. Cir. 2015).

24. Juan Carlos Perez, Google Wants Your Phonemes, InfoWorld (Oct. 23, 2007), https://www.infoworld.com/article/2642023/database/google-wants-your-phonemes.html.

25. “I’m Not a Robot”: Google’s Anti-Robot reCAPTCHA Trains Their Robots to See, AI Bus. (Oct. 25, 2017), https://aibusiness.com/recaptcha-trains-google-robots/.

26. Steven Melendez, Google, Mozilla, and the Race to Make Voice Data for Everyone, Fast Co. (Aug. 24, 2017), https://www.fastcompany.com/40449278/google-mozilla-and-the-race-to-make-voice-data-for-everyone.

27. Akamai, 797 F.3d 1020.

28. See, e.g., Nystrom v. TREX Co., 424 F.3d 1136 (Fed. Cir. 2005).

29. 35 U.S.C. § 112.

30. See Christopher White & Hamid R. Piroozi, Protecting Artificial-Intelligence Systems Using Patent Applications, Young Law., Apr. 25, 2018, https://www.americanbar.org/groups/young_lawyers/publications/tyl/topics/resources-technology/protecting-artificial-intelligence-systems-using-patent-applications/.

Christopher J. White is a U.S. patent agent at Lee & Hayes, PLLC, in Rochester, New York. He specializes in drafting and prosecuting patent applications in telecommunications, artificial intelligence, electronic displays, and cybersecurity. He also uses his experience as an inventor to help researchers and engineers understand the patent system.

 

All opinions are the authors’ and not necessarily those of Lee & Hayes, Purdue University, Indiana University, IUPUI, or any other party. The authors are not affiliated with any of the websites listed in the endnotes.

Hamid R. Piroozi is a professor of law and engineering at Indiana University Law School and Indiana University–Purdue University Indianapolis (IUPUI). He teaches patent-related courses at the law school and a new curriculum related to intellectual property for engineers and scientists at the engineering school.

 

