Harvey promises to tackle Big Law's 'number one' AI fear with new partnership

Harvey has partnered with software provider Intapp to bring its ethical wall and information governance capabilities to Harvey’s AI platform.
The companies say the move is designed to give firms and lawyers confidence to securely deploy generative AI at scale.
Harvey has partnered with Intapp, the US-based software provider, to embed information barriers directly into its AI platform, responding to what Harvey chief executive Winston Weinberg describes as the “number one” concern of law firms - security.
Ben Harrison, president of industries at Intapp, said: “It’s very different than just the concept of one-time permissions. This is a living, breathing compliance organisation that helps firms run and scale over time, continuously monitoring, updating and securing.”
“In the beginning, people were asking whether anyone would even use AI,” Weinberg said. “Then it crossed a threshold where everyone is going to adopt this and security became the number one issue.”
The challenge
As firms expand across geographies and practice areas, overseeing internal confidentiality controls has become more challenging. Meanwhile, AI has evolved much faster than the governance systems designed to contain risk.
An AI system that can query a firm’s entire document management system risks surfacing or drawing on confidential information from other client matters unless its access controls mirror the firm’s information barriers exactly.
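The principle described here can be made concrete with a small, purely illustrative sketch. Neither Harvey nor Intapp publishes implementation details; every name below (`Document`, `EthicalWall`, `retrieve`) is hypothetical. The point it demonstrates is the one in the paragraph above: the wall check must run before any document reaches the model.

```python
# Illustrative sketch only -- not Harvey's or Intapp's actual code.
# Demonstrates permission-aware retrieval: documents from matters the
# querying user is walled off from are filtered out before ranking,
# so restricted material can never appear in an AI answer.

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Document:
    doc_id: str
    matter_id: str
    text: str


@dataclass
class EthicalWall:
    # matter_id -> set of user ids permitted to see that matter
    permissions: dict = field(default_factory=dict)

    def can_access(self, user_id: str, matter_id: str) -> bool:
        return user_id in self.permissions.get(matter_id, set())


def retrieve(user_id: str, query: str, corpus: list, wall: EthicalWall) -> list:
    """Return only documents the querying user is permitted to see.

    The wall check runs *before* relevance matching, mirroring the
    requirement that AI access controls track the firm's information
    barriers exactly.
    """
    allowed = [d for d in corpus if wall.can_access(user_id, d.matter_id)]
    return [d for d in allowed if query.lower() in d.text.lower()]
```

In a real deployment the filter would sit inside the document management system's query layer rather than in application code, but the ordering constraint is the same: access control first, retrieval second.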
High stakes
The stakes used to be lower as governance failures were typically one-off and more easily manageable.
“What used to be a one-off mistake - an email sent to the wrong person - was not a big deal,” Harrison said. “It’s not a one-off situation like it used to be.”
“When an AI tool aggregates and surfaces information across an entire firm’s data ecosystem in milliseconds, a single gap in permissions doesn’t produce one mistake - it produces a pattern of exposure that no one can even see happening in real time,” said Harrison.
The most common exposures arise from legacy security gaps, unintended information access, informal team additions and out-of-date permissions.
“In a world where Harvey is querying across all of that data, this becomes high-severity exposure and the firm may not even know until a client or a regulator finds it first,” said Harrison.
Speed versus trust
Most law firms are eager to deploy AI quickly, but not at the expense of security.
“Speed is how firms win,” said Sebastian Hartmann, vice president of alliances and partner ecosystem at Intapp. “But you can’t have speed break trust.”
As AI systems become capable of handling end-to-end workflows, the risk profile changes.
“We’re getting to the point where these systems can do end-to-end work,” Weinberg said. “If they can do that, they have to operate under the same rules and permissions as a human lawyer.”
“If firms don’t have proper security in place, they’re going to have data breaches. Or worse, they could be working on two private equity deals and one leaks into another - and that’s the end of gen AI for ten years,” said Weinberg.
A natural extension
Under the partnership, existing Intapp policies will sync with Harvey’s access and sharing controls across the platform. Intapp’s client roster includes 96 of the Am Law 100, and the partnership is, in many ways, a natural extension of the infrastructure law firms already rely on.
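Neither company has described how the sync works internally, but in principle it amounts to a one-way mapping from matter-level wall policies into per-user allow-lists on the AI side. The sketch below is a hypothetical illustration of that mapping; no real Intapp or Harvey API is used, and the policy format is invented for the example.

```python
# Hypothetical illustration of a one-way policy sync: flatten
# matter-level ethical-wall rules (allow/deny lists, invented format)
# into per-user allow-lists that an AI platform could enforce at
# query time. Deny entries always win over allow entries.

def sync_policies(policies: dict) -> dict:
    """Flatten wall policies into per-user allow-lists.

    policies: matter_id -> {"allowed": [user, ...], "denied": [user, ...]}
    returns:  user_id   -> set of matter_ids the user may query
    """
    acl: dict = {}
    for matter_id, rule in policies.items():
        denied = set(rule.get("denied", []))
        for user in rule.get("allowed", []):
            if user not in denied:  # deny takes precedence
                acl.setdefault(user, set()).add(matter_id)
    return acl
```

Re-running the sync whenever a policy changes would keep the AI platform's permissions current, which is the "living, breathing" behaviour Harrison describes.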
“We spent 25 years building the business-of-law infrastructure,” Harrison said. “Harvey is building the practice-of-law AI. It’s a great marriage. The reason we're together today is the clients have been asking: ‘can you help us with this?’”
“Once the information barrier is set up and ethical walls are created, Intapp constantly monitors and follows matters and the people around them into perpetuity and updates in real time,” said Harrison.
Sovereign-level security
Part of the urgency stems from the nature of the information law firms handle.
“I can’t think of another industry with more market-moving data than law firms,” Weinberg said. “Imagine a large merger leaking - that can move, or even crash markets.”
The sensitivity of this information shapes Harvey’s broader approach to security.
“We work with top firms, large banks and private equity firms. We treat our security like we could be subject to sovereign attacks,” Weinberg said. “That’s the level we’re operating at.”
Friction kills adoption
Both sides of the partnership say security and compliance must be seamless.
“This has to be an auto lock,” Weinberg said. “Users shouldn’t have to think about it.” He added: “If you cause any workflow friction for a lawyer, they’re done with the product.”
The aim of the partnership is to make AI secure by default.
“We want to give the confidence that AI at scale is secure. The aim is to let thousands of lawyers run with it and feel really good about it,” said Harrison.
Weinberg said Harvey opted to partner rather than build these capabilities internally, preferring to move quickly and keep its engineers focused on its core competency of AI quality.
“The death of most really fast-growing startups is trying to do many things,” said Weinberg.