Handvantage

Legal services and agentic AI: privilege, competence, and the supervision rule.

Three professional-conduct rules that change when AI processes privileged communications, what ABA Formal Opinion 512 and the Canadian law societies actually require, and the line between AI as a research tool and AI as the practice of law.


Legal services is the sector where agentic AI procurement has the smallest margin for error, because the same conduct rules that make a lawyer’s work valuable — confidentiality, competence, supervision, the duty to communicate honestly with clients — apply unmodified to AI tools that participate in the work. A misjudged AI deployment in a law firm is not a productivity issue; it is a discipline-committee issue, a malpractice-insurance issue, and a reputational issue all at once. Two lawyers and their firm were sanctioned in the 2023 Avianca case for submitting ChatGPT-fabricated citations, and discipline cases have continued through 2024 and 2025 in growing volume. This dossier walks through the three conduct rules that bear most directly on agentic AI use in legal practice, the specific guidance the ABA and the Canadian law societies have issued, and the line between AI as a research tool and AI as the practice of law.

Before the regulatory walk: a critical scope clarification, parallel to the healthcare and fintech dossiers. Vantage Workspace is a productivity platform that supports lawyers in their work. It is not a substitute for a lawyer. It does not produce legal advice that a client receives directly. It does not draft and file documents that go to court without lawyer review. It does not engage in the practice of law, which in every jurisdiction is reserved for licensed members of the bar. The platform fits the workflow of a lawyer or a legal professional doing their work: drafting, research, summarisation, scheduling, document management, internal coordination. The platform does not raise the unauthorised-practice-of-law concerns that have driven discipline against several AI-assisted “legal service” companies, because we do not deploy in that category. We will tell a legal-services buyer this directly in the first conversation if their use case veers into UPL territory.

The first conduct rule is confidentiality. The American Bar Association’s Model Rule 1.6 and the parallel rules in every state of the US (and the equivalent provisions in the Federation of Law Societies of Canada Model Code, adopted with provincial variations) impose a duty on lawyers to preserve the confidentiality of information relating to the representation of a client. The rule extends beyond solicitor-client privilege to all information about the representation, regardless of whether the information is protected by privilege rules in court.

When an agentic AI tool processes confidential information, the question becomes: where does that information go, and who can see it? Most consumer-grade AI assistants and most enterprise SaaS AI products process data on the vendor’s infrastructure, with terms of service that permit some level of telemetry, training-data use, or vendor employee access for support purposes. None of those terms are necessarily incompatible with Rule 1.6, but they require analysis: the lawyer needs to understand where the data goes, what the vendor’s contractual restrictions are, whether informed client consent is required, and whether the lawyer’s obligation to take reasonable precautions to prevent inadvertent disclosure (Comment [18] to Rule 1.6 in the US, similar comments in Canadian rules) is satisfied by the vendor’s security posture.

The Law Society of Ontario’s 2024 guidance on AI use in legal practice, the Canadian Bar Association’s 2024 practice resource, and the Federation of Law Societies of Canada’s 2025 model code amendments collectively make the operative principle explicit: the lawyer remains responsible for the confidentiality of information processed by AI tools the lawyer chooses to use. Vendor terms that permit any use of client data outside the immediate processing of the lawyer’s request — for model training, for product improvement, for vendor analytics — are likely to fail Rule 1.6 unless the client has been informed and has given informed consent. Most law firms have responded by procuring AI tools that contractually prohibit such uses; some have deployed AI tools on infrastructure they control, where the question doesn’t arise.

The second conduct rule is competence. ABA Model Rule 1.1 (and the parallel competence rules in Canadian provincial codes) requires lawyers to provide competent representation, including the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. Comment [8] to Rule 1.1, added in 2012 and now adopted in nearly every US jurisdiction, expressly extends competence to include keeping abreast of changes in the law and its practice — “including the benefits and risks associated with relevant technology.”

Applied to agentic AI, the competence rule means a lawyer who chooses to use an AI tool has an affirmative duty to understand what the tool does, where its limitations lie, and what risks attach to its use. ABA Formal Opinion 512, issued in July 2024, made this explicit for generative AI specifically: lawyers must “reasonably understand” the AI tools they use, including whether the tool may produce false or misleading outputs (the “hallucination” problem), whether the tool retains or transmits client information, and whether the tool’s use is consistent with the lawyer’s other professional obligations. A lawyer who delegates work to an AI tool without this understanding risks a competence violation regardless of whether the AI’s output is correct.

The third conduct rule is supervision. ABA Model Rule 5.1 governs the supervision of subordinate lawyers; Rule 5.3 governs responsibilities regarding non-lawyer assistance. ABA Formal Opinion 512 (and parallel guidance in Canadian provinces) confirmed in 2024 that AI tools used in legal practice fall within the supervision framework that applies to non-lawyer assistance. The lawyer is responsible for the AI’s work as if the work had been done by a paralegal or assistant under the lawyer’s supervision.

The supervisory framework in practice means: the lawyer is responsible for verifying the AI’s outputs before they are used. An AI that drafts a research memo with fabricated case citations has, under the supervision rule, produced work the supervising lawyer had a duty to verify. The Avianca decision (Mata v. Avianca, Inc., S.D.N.Y. 2023) and the discipline cases that followed it each turned on this point: the lawyers had not verified the AI’s output before submitting it to the court. The conduct rules do not prohibit AI assistance — they require that the lawyer’s supervision actually happen, with documentation that demonstrates it happened, before the work product leaves the firm.

The privilege dimension is distinct from confidentiality and worth naming separately. Solicitor-client privilege (in Canadian law) and attorney-client privilege (in US law) protect communications between lawyer and client, made for the purpose of obtaining legal advice, from compelled disclosure in court. The privilege belongs to the client, not the lawyer. The question for AI use is whether processing privileged communications through a third-party AI tool waives the privilege. The current consensus across US and Canadian jurisdictions is that processing through a vendor’s AI tool does NOT automatically waive privilege, provided the vendor is under a confidentiality obligation equivalent to the lawyer’s and the vendor’s access is limited to what is necessary to perform the contracted service. This is the legal basis for the legal-services AI vendor industry that has emerged since 2023.

However, the privilege analysis becomes more complicated when (a) the AI vendor uses the data for purposes beyond the immediate processing (training, analytics, etc.), (b) the AI vendor’s subcontractors or sub-processors have access, or (c) the AI vendor’s jurisdiction of operation creates exposure to compelled disclosure under foreign law (a particular concern with US-based vendors processing Canadian-client data in a context where US discovery orders might reach the vendor). Each of these is a real risk that requires fact-specific analysis. The Canadian Bar Association’s 2024 practice resource specifically flags the cross-border data processing concern as a privilege risk that lawyers should evaluate before deploying AI tools.

ABA Formal Opinion 512 (July 2024) is the most significant recent guidance and consolidates the analysis across confidentiality, competence, supervision, and the duty of communication with clients. The opinion specifically addresses generative AI use and concludes: lawyers may use AI tools, must understand them, must take reasonable precautions to protect client information, must verify the AI’s outputs, must obtain client consent in some circumstances, and must consider whether to disclose AI use in the lawyer’s billing and communication with the client. The opinion does not prohibit AI; it requires that AI use be conducted within the existing professional-conduct framework. Subsequent state ethics opinions in California, New York, Florida, Illinois, and several other jurisdictions have largely tracked Opinion 512’s approach.

The platform’s posture for legal services. Vantage Workspace fits the workflow of lawyers and legal professionals doing their work, with a deployment model that addresses each of the four professional-conduct dimensions named above. On confidentiality (Rule 1.6): the platform runs on the firm’s infrastructure, the firm controls the data plane, no telemetry phones home to the vendor, and no data is used for model training. On competence (Rule 1.1): the platform’s outputs are accompanied by sufficient transparency for the lawyer to verify (model version, prompt, retrieval sources where applicable). On supervision (Rule 5.3): the platform’s audit log records the lawyer’s review action — when the lawyer reviewed the AI’s output, what changes the lawyer made, when the lawyer signed off — providing the documentation the supervising lawyer needs. On privilege: the platform’s deployment on firm infrastructure means no third-party AI vendor accesses client data in the way that would create privilege risk; sub-processor lists are minimal and disclosable.

What the platform does NOT do for legal services: it does not provide legal advice. It does not file documents on a client’s behalf. It does not engage with clients without lawyer involvement. It does not produce work product that a court receives without lawyer review and signature. It does not adjudicate matters. Each of these would constitute the practice of law, which is reserved to licensed lawyers; the platform is designed to support lawyers, not to replace them.

What the firm still owns. The conduct rules apply to the lawyer, not to the platform vendor. The lawyer’s competence in the AI tools the lawyer uses is the lawyer’s; the platform makes verification possible but does not perform the verification. The lawyer’s supervision of the work product is the lawyer’s, with the platform providing the audit infrastructure that demonstrates the supervision happened. The client communication and consent, where required, are the lawyer’s; the platform does not communicate with clients on the lawyer’s behalf without the lawyer initiating the communication. The retention policies, including for AI-assisted work product, are the firm’s; we provide the framework, the firm’s policy office sets the rules.

Three deployment patterns we see in legal-services procurement, ordered by professional-conduct exposure. The first is internal-firm-operations: the platform supports internal firm work — knowledge management, internal communications, conflicts checking, scheduling, billing administration — where client information is incidentally present but the work is not the practice of law. Confidentiality (Rule 1.6) applies in the standard way; competence and supervision rules apply at the level of the lawyer using the tool; privilege is not implicated because the work is not legal practice. This is the most common deployment shape in mid-to-large firms.

The second pattern is lawyer-augmenting-on-client-matters: the platform helps the lawyer with client work — research, drafting, document review, summarisation of case materials — where the lawyer’s judgment and supervision are central. Confidentiality applies in full; competence requires the lawyer to understand the tool; supervision requires the lawyer to verify outputs; privilege is preserved through the on-firm-infrastructure deployment. Most legal-services deployments of agentic productivity platforms ultimately land here.

The third pattern is client-facing-AI or AI-as-the-practice-of-law. This is where Vantage Workspace stops being the right product. Firms or organisations that want AI to engage with clients directly, provide legal advice, or substitute for lawyer judgment are operating in territory where unauthorised-practice-of-law concerns apply, and the platforms in this category (LegalZoom’s automated-services arm, DoNotPay until its 2024 FTC settlement, certain experimental client-facing tools) operate under separate regulatory pressure. We do not deploy in this category and will tell a buyer so directly.

Five questions a legal-services buyer should ask any agentic AI vendor before procurement. First: “Where is client data processed, and what are the contractual restrictions on the vendor’s use of that data? Specifically, can you confirm in writing that no client data is used for model training, no employee of the vendor accesses client data outside the immediate processing of the firm’s requests, and no sub-processor outside a defined list has access?” Vendors who hesitate at this question are not ready for legal-services deployment.

Second: “For ABA Model Rule 1.1 competence, what materials do you provide to help our lawyers reasonably understand your AI tool? Specifically, what model versions are in use, what are the documented limitations, and how do you communicate updates to model behaviour?” The right answer involves specific documentation; the wrong answer is a marketing claim about reliability without verifiable technical artefacts.

Third: “For Rule 5.3 supervision, what does the audit log look like for a lawyer’s review of an AI output? Walk me through how a discipline-committee investigation would reconstruct, six months after the fact, that a specific lawyer verified a specific AI output before it was used.” This is the question that separates platforms that produce supervision-grade audit trails from platforms that produce general application logs.
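The reconstruction question can be made concrete with a small sketch. Assuming append-only review records stored as JSON lines with ISO-8601 UTC timestamps (field names hypothetical), the discipline-committee question — did this lawyer sign off on this output before it was used? — reduces to a single scan:

```python
import json
from typing import Iterable

def verified_before(log_lines: Iterable[str], output_id: str,
                    reviewer: str, used_at: str) -> bool:
    """Return True if `reviewer` signed off on `output_id` before `used_at`.

    Assumes one JSON review record per line; ISO-8601 UTC timestamps
    compare correctly as plain strings.
    """
    for line in log_lines:
        rec = json.loads(line)
        if (rec.get("output_id") == output_id
                and rec.get("reviewer") == reviewer
                and rec.get("signed_off_at") is not None
                and rec["signed_off_at"] < used_at):
            return True
    return False

# Toy log: one signed-off review, one review never signed off.
log = [
    json.dumps({"output_id": "draft-memo-17", "reviewer": "a.lawyer",
                "signed_off_at": "2026-01-15T16:05:00+00:00"}),
    json.dumps({"output_id": "draft-brief-03", "reviewer": "b.lawyer",
                "signed_off_at": None}),
]

print(verified_before(log, "draft-memo-17", "a.lawyer", "2026-01-16T09:00:00+00:00"))  # → True
print(verified_before(log, "draft-brief-03", "b.lawyer", "2026-01-16T09:00:00+00:00"))  # → False
```

A general application log cannot answer this query, because it records events, not review decisions; that is the practical difference the third question is probing.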

Fourth: “What is your sub-processor list, where do they operate, and what is your exposure to compelled disclosure under foreign law? Specifically, for Canadian-domiciled firms processing data on US-based infrastructure, what is your position on FISA Section 702 orders or other US discovery mechanisms that might reach our client data?” This question matters particularly for firms with cross-border practices.

Fifth: “Have you been the subject of any discipline-committee investigation, malpractice-insurance claim, or court sanction related to AI use in legal practice? What changes did you make as a result?” Vendors who have been through these conversations have done the work; vendors who haven’t may answer honestly, or may not yet realise why the question is a reasonable one to ask.

The 2026 legal-services AI procurement environment is being shaped by three pressures: the maturation of state and provincial ethics opinions following ABA Formal Opinion 512, the growing volume of discipline-committee cases involving AI hallucination and inadequate supervision, and the increasing sophistication of malpractice insurance carriers in pricing AI-related risk. Firms that will deploy agentic AI successfully in this environment are the ones that match the right tool to the right use case, with the lawyer’s supervision and competence rules respected at every step. Firms that will see discipline cases are the ones whose lawyers treated AI as a productivity miracle that didn’t require the same verification their other work-product practices demand.

Vantage Workspace fits a specific portion of the legal-services AI workload — internal firm operations and lawyer-augmenting work where the lawyer’s judgment remains central. The portion is large enough that most law firms could deploy across most of their internal AI use cases. The portion the platform does not fit (client-facing legal-advice automation, document filing without lawyer review) is a different product category, with its own regulatory pressure and its own discipline risks. A legal-services buyer who knows the difference can make procurement decisions that survive ethics review, malpractice underwriting, and the discipline-committee scrutiny that will continue to define the AI-in-legal-practice conversation through 2026 and beyond.



CONTINUE THE CONVERSATION

If something here is what you're working on, talk to us.

Articles like this one come out of conversations with practitioners, security leaders, and engineering teams in regulated industries. If the writing reflects your situation, the next conversation is probably worth having.


Or write to hello@handvantage.com directly.