April 01, 2024 Mind Your Business

Existing methods of accessing knowledge will coexist with generative AI

By Jan Van Hoecke
Far from fading into obscurity, old ways of accessing knowledge will continue to provide value and will coexist with generative AI.

As generative artificial intelligence stormed the scene over the past year with its snazzy Q&A interface, it was fair to ask whether existing ways of accessing knowledge, such as keyword search or templates, would become redundant.

After all, why would you need those old ways of tracking down specific pieces of knowledge if you can just ask the all-knowing AI to call it up for you?

The fact is, there are different entry points into knowledge, depending on what someone is trying to do. Far from fading into obscurity, old ways of accessing knowledge will continue to provide value and will coexist with generative AI.

Different approaches

Imagine a seasoned associate who needs to find the best starting point for drafting a bulletproof New York City commercial real estate lease. Nobody, not even an experienced lawyer, likes to start with a blank piece of paper.

If they already know the specific template they want to use—perhaps even down to the document number assigned to that template—a generative AI interface might not add that much value or save them that much time. They know what they’re looking for, and they can quickly and easily call up that resource in the firm’s document management system. Using generative AI would be overkill in this scenario—a bit like asking generative AI to take you to the website for the IRS rather than just typing in IRS.gov.

Now imagine a very different scenario: A first-year associate is looking for the best resource for that same commercial real estate task, but they don’t even know where to start or which resources are available to draw upon. Moreover, they don’t know what they don’t know; they aren’t aware of the specific knowledge assets that could be relevant to the task at hand.

A generative AI-powered Q&A interface could be hugely valuable here and a real time-saver—but some prep work needs to occur beforehand to make sure it is serving up the best results.

Start with good data

Clean data sets are essential to getting useful answers out of any generative AI interface, which means legal professionals first need to examine the information architecture within the law firm. What are the trusted data sets in the organization, and where does that data live?

If a firm has a document management system, identifying it as a trusted source is an important first step. However, providing the large language models that underpin generative AI with access to the entirety of the files stored within the system isn’t a good idea.

It is far better to give the large language model access to a limited subset of data from the document management system. For example, rather than giving it access to all the draft versions of an important document, provide access only to the final approved versions.

A natural question at this point might be: Wouldn’t having access to drafts and learning how things were edited help train the AI better? The answer is actually no, for several reasons.

For starters, these question-and-answer systems are based on retrieval, not retraining: the AI is provided with an accurate knowledge base that it can consult to answer the question, an approach often called retrieval-augmented generation. In other words, when asked a question, the AI looks for the most relevant document and then uses that document to formulate the answer. It does not rely on its internal knowledge. From this perspective, the AI should not be provided with incorrect or outdated information.
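To make that retrieve-then-answer flow concrete, here is a minimal Python sketch. The tiny in-memory knowledge base, the keyword-overlap scoring and the helper names are all hypothetical simplifications; a real deployment would retrieve from the firm's curated document store and send the resulting prompt to its chosen language model.

```python
# Minimal sketch of the retrieve-then-answer pattern described above.
# The knowledge base, scoring method, and helper names are hypothetical.

KNOWLEDGE_BASE = {
    "NYC commercial lease template (final, approved)":
        "Standard terms for a New York City commercial real estate lease ...",
    "Share purchase agreement template (final, approved)":
        "Standard terms for a share purchase agreement ...",
}

def retrieve_best_document(question: str) -> tuple[str, str]:
    """Return the (title, text) pair that shares the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(item):
        title, text = item
        return len(q_words & set((title + " " + text).lower().split()))
    return max(KNOWLEDGE_BASE.items(), key=overlap)

def build_grounded_prompt(question: str) -> str:
    """Formulate a prompt that tells the model to answer only from the retrieved document."""
    title, text = retrieve_best_document(question)
    return (
        "Answer the question using only the document below.\n"
        f"Document ({title}):\n{text}\n\n"
        f"Question: {question}"
    )

# In a real system this prompt would be sent to the firm's language model;
# printing it here simply shows how the answer is grounded in one trusted document.
print(build_grounded_prompt(
    "What is the best starting point for a New York City commercial real estate lease?"
))
```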

Additionally, even if we were to retrain the AI on the knowledge base, it is important to understand that its learning process is different from how humans learn. AI models need to be shown correct examples; their training process does not include a reasoning step that would help them understand the difference between a bad example and a good one.

Beyond limiting access to final versions, organizations might also want to restrict how far back in time to pull data. Some documents become stale because the legal or regulatory landscape has evolved significantly in recent years through new rulings or other developments. For this reason, firms might want to limit the knowledge base to a specific time range rather than stretching back years and years.
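As a purely illustrative example, the short Python sketch below shows what that kind of curation might look like: keeping only final, approved versions that fall within a chosen date range. The Document fields and the cutoff date are assumptions made for the sake of the example, not a real document management system schema.

```python
# Hypothetical sketch: selecting which documents the AI is allowed to consult.
# The Document fields and the cutoff date are illustrative assumptions.

from dataclasses import dataclass
from datetime import date

@dataclass
class Document:
    title: str
    status: str          # e.g., "draft" or "final"
    last_modified: date

def curate(documents: list[Document], cutoff: date) -> list[Document]:
    """Keep only final, approved versions modified on or after the cutoff date."""
    return [d for d in documents if d.status == "final" and d.last_modified >= cutoff]

docs = [
    Document("NYC commercial lease v3", "final", date(2023, 6, 1)),
    Document("NYC commercial lease v2", "draft", date(2023, 5, 15)),
    Document("Legacy lease template", "final", date(2015, 1, 10)),
]

# Only the recent, final version survives curation.
print([d.title for d in curate(docs, cutoff=date(2020, 1, 1))])
```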

Note that determining what good content looks like isn’t a one-off endeavor. The definition of good content should be dynamic, and for precisely this reason, there should be an internal knowledge curation team in charge of maintaining the data the model draws upon. It is, after all, an important part of the firm’s intellectual property.

Maintaining tradition

Grounding generative AI’s responses in good content is what unlocks its ability to serve as a useful tool—and really, the approach is not that dissimilar from the prep work required for traditional forms of knowledge management, whether it’s keyword search or templates.

Neither of those traditional forms of knowledge management just magically happens on its own. Work product needs to be structured and tagged—e.g., “this is an example of a commercial real estate lease,” “this is an example of a share purchase agreement,” and so on—and the best examples need to be identified.

This identification can occur through manual curation efforts or through more automated processes; for instance, the template downloaded most frequently from the document management system by members of a certain practice group would be identified as the best template to use as a starting point for a specific use case. Either way, the goal is to ensure valuable knowledge is identified so that it can be properly leveraged.
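To illustrate the automated route, here is a short, hypothetical Python sketch that surfaces the template downloaded most often by a given practice group. The download-log format is an assumption made for the example, not an actual document management system export.

```python
# Hypothetical sketch: finding the most frequently downloaded template per practice group.
from collections import Counter

# Each record pairs a practice group with the template one of its members downloaded.
download_log = [
    ("Real Estate", "NYC commercial lease v3"),
    ("Real Estate", "NYC commercial lease v3"),
    ("Real Estate", "Retail lease rider"),
    ("Corporate", "Share purchase agreement v5"),
]

def best_template(log, practice_group: str) -> str:
    """Return the template downloaded most often by members of the given practice group."""
    counts = Counter(name for group, name in log if group == practice_group)
    template, _ = counts.most_common(1)[0]
    return template

print(best_template(download_log, "Real Estate"))  # -> NYC commercial lease v3
```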

While generative AI is certainly a new arrow in the search-and-knowledge quiver that’s capable of unlocking new capabilities, traditional approaches are still going to be relevant. To ensure lawyers can get access to the knowledge they need to do their best work and deliver the best outcomes, firms would do well to make sure they’re not choosing one or the other—this is a case where old and new alike have something useful to offer.

Mind Your Business is a series of columns written by lawyers, legal professionals and others within the legal industry. The purpose of these columns is to offer practical guidance for attorneys on how to run their practices, provide information about the latest trends in legal technology and how it can help lawyers work more efficiently, and strategies for building a thriving business.

Interested in contributing a column? Send a query to [email protected].

This column reflects the opinions of the author and not necessarily the views of the ABA Journal—or the American Bar Association.

Jan Van Hoecke

Vice President, AI services at iManage