When attorneys hear "AI agent handling client communications," their first instinct is often a reasonable one: wait — is this even allowed? The concern isn't unfounded. Law is one of the most heavily regulated professions in the country, and the duty to protect client confidences is foundational.

But the concern is often based on a misreading of what ethics rules actually require. The ABA and most state bars have now addressed AI and cloud technology directly — and the conclusion is not "don't use it." It's "use it thoughtfully."

Here's a plain-language walkthrough of what the rules say and what they actually require of you.

What ABA Model Rule 1.6 Actually Says

Model Rule 1.6 governs confidentiality of information. Subsection (c) is the relevant provision for technology decisions:

ABA Model Rule 1.6(c)
"A lawyer shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client."
American Bar Association Model Rules of Professional Conduct

Two words do a lot of work here: "reasonable efforts." Not "perfect" efforts. Not "guaranteed" protection. The standard is reasonableness — and what constitutes reasonable is informed by context, including the sensitivity of the information, the likely risks, and the cost of implementing protective measures.

This matters because some attorneys read Rule 1.6 as an absolute prohibition on sharing client data with any third-party system. That's not what the rule says. It says you must take reasonable steps to protect that data. The question is whether your AI vendor clears that bar.

ABA Formal Opinion 477R: The Cloud Guidance That Changed Everything

In 2017, the ABA issued Formal Opinion 477R specifically addressing the use of cloud services and internet communications. This opinion updated the ABA's 1999 guidance on electronic communications (Formal Opinion 99-413) and significantly clarified the analysis.

The key findings:

Lawyers may use cloud services and transmit client information over the internet, provided they make reasonable efforts to prevent inadvertent or unauthorized access.
There is no one-size-fits-all rule; the analysis is fact-specific, weighing the sensitivity of the information against the security measures in place.
Particularly sensitive information may call for special precautions, such as encryption or avoiding certain channels altogether.

Key takeaway from ABA 477R: The analysis isn't "is this cloud-based?" but rather "does this service employ reasonable security measures appropriate to the sensitivity of the information?" If yes, you're on solid ground.

This opinion effectively settled the debate about whether lawyers could use cloud tools at all. They can. The compliance question is about the quality of the security, not the category of technology.

What "Reasonable Measures" Looks Like in Practice

The ABA has outlined factors relevant to determining whether a lawyer's technology use meets the reasonableness standard (Comment 18 to Rule 1.6):

The sensitivity of the information
The likelihood of disclosure if additional safeguards are not employed
The cost of employing additional safeguards
The difficulty of implementing those safeguards
The extent to which the safeguards adversely affect the lawyer's ability to represent clients

For routine intake communications — the kind AI agents typically handle — the sensitivity is moderate. You're capturing name, contact info, and a general description of the matter. You're not transmitting sealed court documents or privileged legal strategy.

In that context, what does "reasonable" require? At minimum: encrypted transmission, access controls, a reputable vendor with documented security practices, and the ability to delete client data if needed.

Where Shared Cloud Platforms Get Complicated

General-purpose AI tools like the consumer version of ChatGPT present real ethics questions. When you paste client information into a shared model, that data may be used to train future versions of the model. That's not hypothetical — it's the default behavior of many consumer AI products.

That's a problem. And it's where the cautious attorney's instinct is correct: you should not use consumer AI products with client data without understanding the data retention policies.

But this concern doesn't apply to purpose-built legal AI systems that run on isolated infrastructure and have explicit data handling commitments.

Why Private Dedicated Servers Exceed the Standard

There's a meaningful difference between running AI on shared public cloud infrastructure and running it on dedicated private servers. The latter eliminates the most common concerns:

How dedicated server deployments address the ethics analysis
No data commingling — client data is isolated on hardware dedicated to your firm, not shared with other organizations
No training on your data — client conversations are not used to improve shared models or made available to other users
Defined retention and deletion — you control how long data is stored and can request deletion of any record
Encrypted at rest and in transit — data protection that matches or exceeds what most law firms already use for email
Auditable access logs — you can document who accessed client data and when, satisfying supervision requirements

Running AI on dedicated infrastructure doesn't just meet the "reasonable measures" standard — it often exceeds it. Many attorneys already share client information with third-party vendors (billing software, case management platforms, cloud storage) without a second thought. A purpose-built AI system with dedicated infrastructure and explicit data controls is a more restrictive arrangement than most of those.

What About State Bar Opinions?

While the ABA sets model rules, each state bar can interpret and adopt those rules differently. That said, the trend in state bar guidance has followed the ABA's direction: cloud and AI tools are permissible with appropriate safeguards.

A non-exhaustive sampling:

New York (Ethics Opinion 842, 2010): lawyers may store client files with an online provider if they exercise reasonable care in selecting and vetting it.
Pennsylvania (Formal Opinion 2011-200): cloud computing is permissible with reasonable safeguards.
Florida (Ethics Opinion 24-1, 2024): lawyers may use generative AI, subject to the existing duties of confidentiality, competence, and oversight.
California (State Bar practical guidance, 2023): generative AI use is governed by existing duties; lawyers should not input confidential client information into tools that lack adequate protections.

Bottom line: No state bar has prohibited AI tools categorically. The consistent message across jurisdictions is: do your due diligence on the vendor's security, use tools with appropriate data protections, and don't paste client information into consumer-grade shared tools without understanding how that data is handled.

The Competence Angle: You May Have an Obligation to Understand This

Comment 8 to Model Rule 1.1 on competence requires lawyers to keep abreast of changes in the law and "the benefits and risks associated with relevant technology." As AI becomes a standard part of legal practice, the question may shift from "can I use this?" to "should I understand enough about AI to make a reasoned decision?"

The bar associations are not asking you to become a technologist. They're asking you to exercise informed judgment — to understand enough about the tools you use (or choose not to use) to make decisions that protect your clients.

That's the same standard you apply to any other vendor relationship in your practice. And it's a standard most careful attorneys are well-equipped to meet.

The Short Answer

Using AI in your law practice is ethically permissible under ABA guidance and the interpretation of most state bars — provided the tools you use have genuine security controls, not shared consumer infrastructure. The standard is reasonable measures, not perfection. Private dedicated servers exceed that standard. Consumer-grade shared tools often don't meet it.

The more interesting question, increasingly, is whether failing to adopt efficient AI tools when they're available, vetted, and ethically sound represents its own form of risk — to your clients, your capacity, and your ability to serve them well.