Who's Hiring
Large law firms are retaining AI systems. In April of this year, BakerHostetler hired ROSS for its bankruptcy group. In June and July, the international firms DLA Piper and Clifford Chance, respectively, announced agreements with Kira Systems. In October, Womble Carlyle became one of the latest big firms to announce its hiring of ROSS for use in its bankruptcy practice. Latham & Watkins and von Briesen & Roper have also hired ROSS.
Some believe it will soon be commonplace for law firms to use these artificial intelligence platforms for legal research, contract review, and due diligence, work usually the province of associates. That prospect raises questions for lawyers, chief among them whether they should be concerned about being replaced by robots.
Firm leaders seek to quell that fear. BakerHostetler chief information officer Bob Craig stated, "ROSS is not a way to replace our attorneys—it is a supplemental tool to help them move faster, learn faster and continually improve." "We don't anticipate any cut in numbers, neither do we necessarily anticipate any cut in hours . . . [w]hat we do anticipate is more focus on perhaps the higher-level points," says Sally Wokes, a partner at the London-based international firm Slaughter and May who worked on the development of Luminance.
"We can now spend the time that we would have spent actually finding the issue, really thinking about the issues, thinking about the recommendations to our client," according to Wokes. Rather than viewing these tools as a way to replace attorneys, firms are viewing this technology as a necessary step forward in the delivery of services to clients.
Ethics and the Profession
The use of AI also raises ethical concerns. "In using technology, lawyers must understand the technology that they are using to assure themselves they are doing so in a way that complies with their ethical obligations—and that the advice the client receives is the result of the lawyer's independent judgment," says Wendy Chang, a member of the ABA's Standing Committee on Ethics and Professional Responsibility. There is an element of human error as well, and Chang cautions against the pitfall of blind trust in technology's competence. "The ultimate danger is how competent it all looks. Technology, especially AI technology, can be deceptive because its inner workings are invisible to the naked eye. A user cannot see what is going on behind the scenes. One asks a question, and the answer appears," Chang warns. In the end, lawyers cannot ignore their ethical obligations or abdicate their duties of professional responsibility to technology.
AI has the potential to be a transformative technology in the practice of law. While computers can process information faster and recall it more accurately, human involvement is still required to interpret data, render legal advice and fulfill ethical obligations. As a technological advancement, AI should prove a boon to the business of law, enabling attorneys to improve the delivery of services to clients rather than bringing about the eradication of our kind.