

Drafting Patent Applications Covering Artificial Intelligence Systems

Christopher J. White and Hamid Piroozi

Summary

  • Claims must describe technological inventions without being abstract, incorporate specific improvements to computer technology, and include detailed dependent claims.
  • Claims should cover actions by a single party to avoid indirect infringement issues, keeping in mind the challenges of detectability and multiple parties when AI is involved.
  • AI patent applications require detailed descriptions of the problem and solution and clarity on the training process to support eligibility.
  • Patent eligibility requirements in Europe differ, requiring a demonstration of a further technical effect and favoring the inclusion of block diagrams in AI applications.

Artificial intelligence (AI) is this decade’s tech buzzword, and where there’s technology, there are patents. Over 11,000 U.S. patent applications in AI-related areas have published in the last three years alone. Your tech-company clients are most likely already developing AI solutions to particular problems and want patents that protect those solutions—and if they are not building AI now, they soon will be! Obtaining a patent covering an AI system presents some challenges common across technology fields, and some unique even within the realm of software patents. In this article, we will guide you through the patent application process for newly invented AI systems. We will leave you better prepared to secure the AI patent coverage your clients expect.

AI is a prominent and expanding research and product area. Tesla, Zoox, and many other groups are working feverishly on self-driving cars. Apple’s Siri, Microsoft’s Cortana, and other digital assistants gain new abilities—seemingly every day—to more effectively understand our commands and carry them out. Netflix and YouTube pour hours and dollars every year into improved “you may also like” recommendation algorithms. These are all examples of modern, big-data AI. Although these examples are in very different domains, they are variations on a common theme: training an AI in the lab, then using the trained AI on live data.

Big-Data AI Extracts Patterns from Large Data Sets

Big-data AI uses algorithms to find subtle relationships in a large set of “training” data. The training process locates those relationships and encodes them into a “model,” such as a neural network. The model can then be used to find relationships between inputs similar to those in the training data. For example, the U.S. Post Office uses a trained handwriting-recognition model to read ZIP codes and house numbers on envelopes. The Post Office’s training data dropped into its lap—every letter sent was available for use in developing the model. However, it is not always that easy—a German startup called Viorama had to render artificial images of humans in order to train its model to recognize humans in real images.
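
To make the train-then-use pattern concrete, here is a minimal Python sketch that trains a small neural-network model on the publicly available handwritten-digits data set bundled with scikit-learn and then applies the trained model to images held out of training. The library, data set, and model type are our illustrative assumptions, not a description of the Post Office's (or anyone else's) actual system.

    # A minimal sketch of "train in the lab, then use on live data."
    # Library, data set, and model choice are illustrative assumptions only.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    digits = load_digits()  # small 8x8 grayscale images of the digits 0-9
    X_train, X_live, y_train, y_live = train_test_split(
        digits.data, digits.target, test_size=0.2, random_state=0)

    # Training encodes relationships found in the training data into a model
    # (here, a small neural network).
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
    model.fit(X_train, y_train)

    # The trained model then reads digits it has never seen, loosely analogous
    # to reading ZIP codes on envelopes.
    print("Accuracy on unseen digits:", round(model.score(X_live, y_live), 2))

Note how the training step and the use of the trained model are separable pieces; that separation matters for claiming, as discussed below.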

The trained model itself may reside anywhere it can receive inputs and provide outputs. Recommendation models generally run on servers and interact with the world through users’ web browsers or the web servers those browsers communicate with. Therefore, the trained model may never be exposed to the public. By contrast, a self-driving car needs to be operable even when it has no network connection. Therefore, at least some of the model needs to be stored in the car and available for use.

Get the Most Out of Your Meetings with Inventors

As you can see, a big-data AI system has a lot of different parts that come together to provide a smooth user experience. When you meet with inventors, make sure to find out what the invention actually is. Inventors can improve an AI system’s performance by improving how the training data is collected, how the data is mapped into “features” (the actual inputs of the model), how the model is trained, how data or features are provided to the trained model, or how the model’s outputs are post-processed or interpreted. Inventors can also improve the AI’s performance by adjusting the internal structure of the model to fit more effectively with the problem or problem domain. Therefore, inventions could include any of those—or any combination! For example, we have written a number of applications covering both a new model structure and a new way of training that model. Claiming such inventions can be challenging, as we will discuss below. But the prerequisite for even attempting to draft AI claims is understanding clearly which pieces of the AI ecosystem an invention improves.
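
For readers who find a skeleton easier to parse than prose, the hypothetical Python outline below names those same stages as separate functions. The function names are ours, chosen purely for illustration; the point is that a claimable improvement may live in any one of these stages, or in how they are combined.

    # Hypothetical decomposition of a big-data AI pipeline into the stages
    # discussed above. All names are illustrative; a real system fills in each
    # stage with domain-specific logic.
    from typing import Any, Sequence

    def collect_training_data(sources: Sequence[Any]) -> list:
        """How the training data is collected."""
        ...

    def extract_features(raw_records: list) -> list:
        """How the data is mapped into 'features' (the model's actual inputs)."""
        ...

    def train_model(features: list, labels: Sequence[Any]) -> Any:
        """How the model is trained, and what internal structure it has."""
        ...

    def run_model(model: Any, live_features: list) -> list:
        """How data or features are provided to the trained model."""
        ...

    def postprocess_outputs(raw_outputs: list) -> list:
        """How the model's outputs are post-processed or interpreted."""
        ...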

So how much of that ecosystem do you need to understand to write an AI patent application? The good news is that you do not have to be an AI expert. However, you do need enough background in AI to know what you do not (yet) know. We recommend you read up on various types of models, the distinction between supervised and unsupervised training, the basics of feature engineering, gradient-descent training, genetic-algorithm training, and minibatch techniques for organizing training data. Now, don’t panic—if that last sentence left you feeling overwhelmed, you are not alone. The AI research community is trying very hard to make information about AI accessible to the general public, since AI systems increasingly affect all of us. For example, the University of Helsinki offers a free, online “Elements of AI” course. Take advantage of online courses, video tutorials, and reviews to expand the bounds of your knowledge, and to learn what those bounds are.
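
As one example of the jargon above, gradient-descent training with minibatches is, at its core, a short loop. The NumPy sketch below shows the textbook version for a simple linear model; the data, learning rate, and batch size are arbitrary assumptions made purely for illustration.

    # Minibatch gradient descent for a linear model, textbook version.
    # Everything here is a simplified assumption for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))                      # 1,000 examples, 3 features
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + rng.normal(scale=0.1, size=1000)   # noisy labels

    w = np.zeros(3)                                     # parameters to be learned
    learning_rate, batch_size = 0.1, 32

    for epoch in range(20):
        order = rng.permutation(len(X))                 # shuffle, then split into minibatches
        for start in range(0, len(X), batch_size):
            batch = order[start:start + batch_size]
            error = X[batch] @ w - y[batch]
            gradient = X[batch].T @ error / len(batch)  # gradient of mean squared error
            w -= learning_rate * gradient               # step "downhill"

    print("learned weights:", np.round(w, 2))           # approaches [2.0, -1.0, 0.5]

This sketch is supervised training (the labels y are known in advance); unsupervised training dispenses with labels, and genetic-algorithm training replaces the gradient step with mutation and selection over a population of candidate models.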

When speaking with inventors, be prepared to ask lots of questions and to feel ignorant occasionally. You are not alone! The inventors we have worked with have been experts in the field, often even creating the state of the art. Many of them suffer from a “curse of knowledge”: they have difficulty seeing their inventions from the perspective of someone (a patent practitioner) not as skilled and well-read in AI as they are. We have had good success knowing just enough at first to ask the inventors pertinent questions, and then building our knowledge from there. Going through this process with the inventors also gives you an idea of the kinds of questions an examiner might ask, and permits you to provide the answers in the specification or claims.

Write Claims the Patent Office Will Consider “Patent Eligible”

As with any patent application, your claims must describe technology in a way the patent system will recognize as an invention. A 2004 attempt to patent the plot of a story was finally snuffed out in 2014, proving that there are limits to how far “invention” will reach. In the vocabulary of the U.S. Supreme Court’s controlling decision in Alice v. CLS Bank, a “patent-eligible” claim to an invention must not be “directed to an abstract idea,” or must include an “inventive concept” that goes above and beyond an “abstract idea.” We think of it this way: a claim to an AI invention must at least have enough meat on its bones to look like the solution to a problem, rather than like wishful thinking. Otherwise, a holding that the claim is “directed to an abstract idea” may be very difficult to overcome.

Unfortunately, Alice left the very definition of “abstract idea” fairly abstract. It is much easier to describe what is not patent eligible under Alice than what is. Some rules of thumb: A claim that performs a preexisting business practice using a computer will likely be ineligible. A claim that can be performed in the human mind will be ineligible. And—most significantly, in our experience—a claim that recites an algorithm that can be implemented on a regular PC will often be ineligible. That last carve-out, unfortunately, is a pit into which a great many AI inventions may fall.

There is some hope, though. The 2016 Federal Circuit decision in Enfish v. Microsoft established that “[s]oftware can make non-abstract improvements to computer technology just as hardware improvements can.” The mention of an algorithm in a claim no longer necessarily implies abstractness, as it seemingly did in the first couple of years after Alice. Therefore, when you draft claims, recite operations that cannot reasonably be done mentally. Give the claims technical features that improve something other than people’s lives. Make the claims look as technological as possible. And, to take best advantage of Enfish, include claim features that improve “computer technology,” such as the operation of the computers running the AI system.

For example, a claim to a new way of training an AI system may reduce the amount of memory required while determining the model. That savings of memory is arguably an improvement in computer technology, because it is an improvement directly within a computer system. Therefore, such a new way of training may be different from an ineligible “‘abstract idea’ for which computers are invoked merely as a tool.” Similarly, in systems that perform training across multiple computers in parallel, a new way of training may reduce the amount of data that has to be exchanged between those computers. This reduction can be considered an improvement in computer technology, and thus the claim may be patent eligible under Enfish.

The Federal Circuit decision in McRO v. Bandai Namco provides another path to patent eligibility: claim “a specific means or method that improves [a] technology” instead of “a result or effect that itself is the abstract idea and merely invoke[s] generic processes and machinery.” Every trained model, when actually used for its intended purpose, involves technical details that can satisfy this requirement. Suppose you are training a model to drive a car. The trained model will require specific inputs—perhaps GPS, a particular configuration of laser or sonar sensors, or multiple accelerometers. That specific input configuration will distinguish your trained model from other models trained to perform similar tasks. Including some details of that input configuration permits you to argue that you are indeed claiming a patent-eligible “specific means or method.”

Finally, as a backstop, give each independent claim at least one dependent claim that recites all the technical detail you can think of. In our experience, it is easier to overcome an Alice rejection of some of the claims than of all of the claims. Including one highly detailed, highly technical claim will improve your odds when arguing for patent eligibility of at least that claim.

Stay Current as the Law of Patent Eligibility Develops

Enfish and McRO are not the only decisions to rewrite many of the rules. In fact, the law of patent eligibility has been changing continually since Alice. How do you keep up with the changes? Well, when the courts significantly change patent law, the U.S. Patent and Trademark Office (USPTO) responds with memos to the examining corps. The memos are publicly available. They are not a substitute for the court decisions themselves, but are still useful aids for those drafting and prosecuting applications for AI inventions. Take advantage of the memos to reduce your workload (just do not forget to read the decisions sooner or later!).

USPTO memos to the corps are good for more than just efficiency. We have spoken with numerous examiners who simply do not have enough hours in their day to read the court decisions affecting patent eligibility. However, examiners are generally trained on the contents of the memos. Therefore, you are more likely to succeed with the examiner if your claims meet the requirements of the law as that law is expressed in the memos.

The memos also sometimes cover ground the decisions themselves have not. A recent patent eligibility case, Berkheimer v. HP, related to a motion for summary judgment. There is no such proceeding before the USPTO. However, Berkheimer raised the standard for showing that an element in a claim is generic or conventional, which is very relevant to patent prosecution under Alice. The corresponding USPTO memo applies Berkheimer to the situation of examiners and applicants. Therefore, arguing Berkheimer before the USPTO is much easier in view of the memo than based on the decision alone. Read the memos and make good use of them!

Patent Eligibility in Europe Follows Different Standards

The confusion of patent eligibility in the United States since Alice stands in stark contrast to the long-established European approach to patent eligibility. Under the European Patent Convention (EPC), software as such is unpatentable. However, Europe will permit claims to “computer-implemented inventions,” i.e., inventions that involve a software component together with something else. The “something else” has to provide a “further technical effect”—for example, it has to interact with the real world in some way. Almost any AI invention has such an interaction, or else why would it have been invented? For example, AIs that optimize shipping patterns to reduce fuel requirements, AIs that manage network transfers or compress data to reduce bandwidth usage, and AIs that read sensors or drive actuators can all provide a further technical effect.

For AI systems that directly control motors or sensors, the connection between the AI and the real-world advantages is straightforward. For AI systems that primarily analyze data, however, the connection may be more tenuous. Consider an AI model trained to “caption” images, i.e., to describe the contents of the image in a sentence or two. The model’s input is a human-provided image, and the model’s output is human-readable text. To find further technical effect, dig into how the caption is provided. Try to find technological advantages or real-world connections in the implementation details. Explaining the advantages that come from those details, and from other claim features and combinations of features, will give you arguments against rejections for lack of further technical effect (in Europe) or lack of patent eligibility (in the United States).

Europe has a 16-year head start on the United States in developing precedent to support its patent eligibility rules, and it shows. Take advantage of the wealth of information available online about further technical effect, and talk to European patent firms. Claim specific features that provide further technical effects, and you will be on the right track to European coverage. Plus, arguments for further technical effect in Europe may give you corresponding patent eligibility arguments in the United States.

Write Claims with the Potential Infringer in Mind

U.S. Infringement Law Is Complicated and Evolving

The U.S. Code generally divides infringement into two categories: direct infringement and indirect infringement. The former category is generally accepted as strict liability without concern for knowledge or scienter, while the latter category requires some level of knowledge or intent. Generally—though not always—infringing acts must take place in the United States to infringe a U.S. patent.

A direct infringer is a single entity that commits all the steps of the infringing act, e.g., makes, uses, offers to sell, or sells any patented invention, or imports an implementation of that invention into the United States. Therefore, if the claimed invention is a method claim comprising several steps, all those steps must be performed by a single entity.

An indirect infringer can be liable for inducement or contributory infringement. In the former case, whoever actively induces infringement of a patent is liable as an infringer. In the latter case, whoever offers to sell or sells a component of a claimed invention, or imports such a component into the United States, is liable as a contributory infringer. However, contributory infringement requires that the contributory infringer know that component to be especially made or adapted for use in an infringement, and not be a staple article of commerce suitable for substantial noninfringing use.

In either indirect infringement category, a single entity must still commit all the steps of the claimed invention, necessitating direct infringement to show indirect infringement. The indirect infringement statute allows patentees who have been harmed to go after the real culprits of the infringement. For example, a method claim to an AI model training technique may not be infringed until a model is trained using that technique. The law of indirect infringement permits the patent holder to assert such a claim against the party that produced the training software. This is true even if that party does not train models itself, in an attempt to divide infringement between itself and the end users of its software.

However, indirect infringement has its limitations. For example, if a component sold by an alleged contributory infringer is suitable for substantial noninfringing uses, those uses negate contributory infringement. Direct infringement can also be sidestepped by simply dividing responsibility for the claimed steps among several parties (leaving, at most, indirect infringement): the courts have long recognized that, absent an agency relationship between the actors or some equivalent, a party that does not commit all the acts of an infringement is not liable for direct infringement. This was once true even if the parties had arranged to “divide” their acts of infringing conduct for the specific purpose of avoiding infringement liability.

Faced with this loophole, the Federal Circuit expanded the divided infringement rule from a simple agency rule to a new rule of joint enterprise under divided infringement. Under the old simple agency rule, where a first entity directs or controls a second entity’s acts, the first entity is liable for direct infringement committed by both entities together (vicarious liability). Under the new rule, where two or more entities form a joint enterprise, all can be charged with the acts of the others, rendering each liable for the steps performed by the others as if each is a single actor. Each entity is considered the agent or servant of the others, so the act of any entity within the scope of the enterprise is charged vicariously against the rest.

Claim with These Complications in Mind

The oldest advice is still the best advice: write each claim so that it covers actions by only one party, without requiring any actions by any other party. For example, a claim reciting use of a trained model, without any requirement for a particular training process, is one that a single party can likely infringe on its own. The same is true for a claim to only a training process, without use of a trained model. (However, a claim to only a training process, without a practical use case in the claim language, may face an uphill struggle for patent eligibility.) Focusing on a single party avoids the entire question of indirect infringement and simplifies your eventual infringement arguments.

With AI inventions, remember that there may be more players than you expect. For example, training data may come from numerous sources. If you ever used Google’s former directory assistance service, the sound of your voice became training data for a speech-recognition AI. Similarly, Google’s recent CAPTCHAs that ask you to label images are providing training data for an image-recognition AI. So even a claim step as simple as “receiving training data” may implicate multiple parties.

Finally, do not forget detectability. A claim does not mean much if you can never find out whether someone has infringed it. Companies often keep their training data sets to themselves, as part of the value of the company. To protect that data, the training process often happens inside a company, out of public view. Even deployment of a model does not necessarily make it detectable—for example, you may never see the internals of a model running on a cloud server.

Can an AI System Itself Infringe a Patent Claim?

Infringement of intellectual property by an AI machine presents a new challenge to the rubric of infringement. Several theories may be relied on to advance an infringement suit when the infringing acts are committed by an AI machine. Suppose company X places an AI machine in the field. The AI machine then proceeds to infringe a patented invention. In such a situation, it is more likely than not that company X is unaware of the particular act of the AI machine. Thus, the traditional indirect infringement theories (e.g., inducement under 35 U.S.C. § 271(b) and contributory infringement under 35 U.S.C. § 271(c)) are likely unavailable.

However, direct infringement under the divided rubric described above may still be an available route. For example, under a principal-agent relationship, company X may be viewed as the principal and the AI machine as its agent. To determine whether a single entity directs or controls the acts of another, the Federal Circuit considers general principles of vicarious liability. An actor infringes vicariously by profiting from direct infringement if that actor has the right and ability to stop or limit the infringement. Thus, in order to prevail under such a theory, the principal (company X) must be in a position to stop the acts of the AI machine. It is possible, however, to divide responsibilities so as to avoid this requirement. Under the joint enterprise rubric, the rule requires a fact-based inquiry grounded on the following elements: (1) an agreement, express or implied, among the members of the group; (2) a common purpose to be carried out by the group; (3) a community of pecuniary interest in that purpose among the members; and (4) an equal right to a voice in the direction of the enterprise, which gives an equal right of control. These requirements are more difficult to show than the principal-agent aspect of divided infringement. As such, the authors of this article assert that yet another rule for divided infringement will be needed to address AI-based infringement.

Write a Patent Application That Describes a Technological Development

Because AI inventions have so many moving parts, expect to cover more ground in an AI application than you might in a non-AI application. Include at least enough detail to explain the problem and the solution to an examiner, as much as possible solely by reference to the contents of the application. Give the context of the invention, both problem and solution. Explain (even if briefly) the difficulty the AI invention will overcome, how the model will be trained, and how the trained model will be used to overcome that difficulty. This material will help support arguments for patent eligibility and will give you context to point the examiner to before an interview. We suggest you provide all of this in the detailed description section, rather than the background, to avoid any of the context becoming prior art.

Similarly, describe all the pieces of the end-to-end system in which the model will be used. No, you probably do not have to describe the innards of each individual sensor that will feed data to the model. But if you want to claim the whole system, you should at least describe how to put the pieces together.

For European coverage, it is not enough merely to describe the claimed invention itself. Also describe the further technical effects and which features provide them. Unlike in the United States, in Europe you generally cannot mix and match different portions of the application. If you do not clearly associate features of your claims with the further technical effects they provide, you may not be able to overcome a rejection for lack of further technical effect.

Writing a single application that will support both U.S. and European claims is not always easy. For example, unlike European practice, U.S. practice frowns on listing specific advantages of claimed inventions. However, we believe you can reasonably say in your application that “using feature X can provide benefit Y.” We often include a paragraph of such statements at the end of each figure’s discussion, or mention the benefits throughout with respect to individual features. Do yourself a favor, though, and never mention the word “invention” anywhere in the application (or during prosecution). Patent law may be about inventions, but that does not mean that anyone should characterize your invention—not even you!

A patent application must also describe the manner and process of making and using the invention. You satisfy this requirement differently depending on the nature of the invention. However, almost any big-data AI application will involve a training process and a model. Make sure you explain how the training process works in enough detail that someone could make and use the training process, even if you are not planning to claim the training process itself. Training is arguably an inseparable part of “making” a model-driven invention, so it should not be omitted.

Formalities for Overseas Prosecution

European and Japanese practice generally require a block diagram of the computer system implementing the AI. Therefore, include such a diagram in every AI application you file. In addition, our understanding is that European applications benefit from showing a breakdown of the software into functional, interconnected blocks. Rather than merely showing a “training” block in the diagram, show the different stages of training and how they interrelate. For example, a “feature extraction” block could feed data to a “mathematical optimization” block. The same can be done for use of the model: a “sensor-reading” block can feed the “feature extraction” block.
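
As a hypothetical, code-level analogue of such a block diagram, the sketch below wires a “sensor-reading” block into a “feature extraction” block and then into a “mathematical optimization” block. The class and the toy processing steps are our own illustrative assumptions; the point is only that each block, and each arrow between blocks, is something the figure (and the specification) can name.

    # Hypothetical functional blocks wired together as a block diagram would
    # show them. The block names follow the text above; the toy logic inside
    # each block is an arbitrary stand-in.
    from dataclasses import dataclass
    from typing import Callable, List, Sequence

    @dataclass
    class Block:
        name: str
        process: Callable[[List[float]], List[float]]

    def run_pipeline(blocks: Sequence[Block], raw_input: List[float]) -> List[float]:
        """Feed each block's output into the next block (the diagram's arrows)."""
        data = raw_input
        for block in blocks:
            data = block.process(data)
        return data

    pipeline = [
        Block("sensor reading", lambda raw: [x * 0.01 for x in raw]),         # scale raw sensor counts
        Block("feature extraction", lambda vals: [sum(vals) / len(vals)]),    # e.g., one mean feature
        Block("mathematical optimization", lambda feats: [2.0 * f for f in feats]),  # stand-in for the trained model
    ]
    print(run_pipeline(pipeline, [100.0, 110.0, 120.0]))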

Remember, too, that AI systems are often implemented using field-programmable gate arrays (FPGAs), which are specialized reconfigurable hardware. Therefore, do not merely describe the blocks in your block diagram in terms of software; also describe FPGA or hardware implementations. Even software implementations increasingly run on graphics processing units (GPUs) rather than conventional microprocessors, so include GPU embodiments as well.

Finally, to support European prosecution, we often add a section of “example clauses” to the end of the specification. These clauses are a copy of the claims, converted to text paragraphs and with full multiple dependencies added. This permits you to claim more combinations in Europe than you would otherwise be able to. For example, in a European case with claims 2 and 3 both dependent directly from claim 1, the “intermediate generalization” standard may prevent you from claiming a combination of claims 1, 2, and 3. Including example clauses expressly reciting those combinations (in this example, claim 3 dependent from “claim 1 or claim 2” in the clauses) supports claiming those combinations if necessary during prosecution. The example clauses can also include discussion supporting further technical effect arguments.

Conclusion

Big-data AI grows in capability and reach every day. After the dark days of Alice earlier this decade, the outlook for patents on AI systems is bright. So pay attention to the details during disclosure meetings. Draft claims with an understanding of the AI ecosystem. Craft specifications that shine a spotlight on an invention’s merits. Do these, and your patent applications will effectively protect your clients’ AI inventions.

Published in Landslide Vol. 11 No. 3, ©2019 by the American Bar Association. Reproduced with permission. All rights reserved. This information or any portion thereof may not be copied or disseminated in any form or by any means or stored in an electronic database or retrieval system without the express written consent of the American Bar Association.
