Guides

A Law Firm’s AI Buyer’s Guide: Working Under ABA Formal Opinion 512

Emre Benian · April 19, 2026 · 11 min read
TL;DR

A plain-English breakdown of ABA Formal Opinion 512 and what it actually requires from law firms using AI. Four rules, a nine-question vendor checklist, and what you can and can’t bill clients for.

Summary

In July 2024, the ABA’s Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, which explained how the Model Rules of Professional Conduct apply to lawyers using generative AI. In April 2026, it remains the operating constraint for every US firm using GenAI, and most state bars have built their own opinions on top of it.

If you’re evaluating an AI tool for intake, drafting, research, or client communication, the question isn’t “Is this cool?” It’s “Does this let me stay inside 512?” Here’s a plain-English breakdown of what the opinion requires, plus a nine-question buyer’s checklist for the next time a vendor gets on your Zoom.

What Opinion 512 Actually Says

Opinion 512 does not create new rules. It applies the existing Model Rules of Professional Conduct (competence, confidentiality, candor, fees, supervision) to generative AI. Four of those rules do most of the work: 1.1 (competence), 1.6 (confidentiality), 1.5 (fees), and 5.1/5.3 (supervision).

Everything else in the opinion is commentary on how those four apply. If you understand what those four rules require, you can evaluate any AI tool.

Rule 1.1: Competence

Competence now includes understanding the AI tools you use. You don’t have to be able to explain the math behind a transformer, but you do have to understand four things: what the tool does, what it can get wrong, what data it sees, and what data it trains on. If you can’t answer those four questions for a tool you’re using on a client matter, you are not competent to use it under Rule 1.1.

Practically: before rolling out an AI tool firm-wide, at least one partner or senior associate should spend enough time with it to identify its specific failure modes. “Hallucinations” in the abstract are a known risk; what you need is the list of hallucinations this specific tool is prone to on your specific kind of work.

Rule 1.6: Confidentiality

Confidentiality covers anything relating to a client representation, whether or not it’s privileged. Opinion 512 is explicit: you cannot enter client confidential information into a self-learning GenAI tool (one that trains on your inputs) without the client’s informed consent. “Informed consent” means the client actually understands what they’re agreeing to, which is hard to demonstrate if your engagement letter buries it in a paragraph about “various technologies we may use.”

Ask vendors whether the tool trains on your data. Get the answer in writing. Understand where data lives, who the subprocessors are, and what retention controls exist.

If data leaves your jurisdiction (and with most AI vendors, it does), understand the implications for your bar’s opinion. Some state bars have specifically addressed cross-border data flows; others haven’t yet. Default assumption: if your client would be surprised, you haven’t disclosed enough.

Rule 1.5: Fees

You can’t charge a client for your own learning curve. If you spend five hours learning how to use a new AI research tool, those five hours are not billable to the client. Opinion 512 makes this explicit.

A related pressure: many firms are rethinking hourly billing as AI compresses drafting and research time. If a tool cuts a four-hour task to forty minutes, you have three honest options: bill those forty minutes, move to a fixed fee for the output, or charge per deliverable regardless of time. Opinion 512 doesn’t mandate which. It does mandate that whatever you charge be reasonable and transparently disclosed.
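The three options above can be sketched as simple arithmetic. A minimal illustration, using an assumed $450/hour rate that is not from the opinion or the article:

```python
# Hypothetical numbers: a task that took 4 hours now takes 40 minutes
# with an AI tool. Three honest billing approaches from the text.

HOURLY_RATE = 450.0   # illustrative partner rate, an assumption
OLD_HOURS = 4.0
NEW_HOURS = 40 / 60   # 40 minutes

def bill_hourly(hours: float, rate: float = HOURLY_RATE) -> float:
    """Option 1: bill the actual time spent, post-AI."""
    return round(hours * rate, 2)

def bill_fixed_fee(fee: float) -> float:
    """Option 2: a fixed fee for the output, regardless of time."""
    return fee

def bill_per_deliverable(count: int, price: float) -> float:
    """Option 3: price each deliverable, regardless of time."""
    return count * price

print(bill_hourly(OLD_HOURS))  # pre-AI invoice: 1800.0
print(bill_hourly(NEW_HOURS))  # honest hourly post-AI: 300.0
```

The point of the arithmetic is the gap between the two `bill_hourly` results: whichever option you pick has to be reasonable and disclosed, not silently billed at the old four-hour figure.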

You can bill clients for the AI tool’s cost itself if the cost is reasonable and disclosed. Think of it the way Westlaw access gets passed through to clients. The tool is an expense of the representation, not a profit center.

Rules 5.1 and 5.3: Supervision

Partners and supervisors are responsible for subordinates’ work, including work product that involves AI. A junior associate using an AI tool without you reviewing the output is still your responsibility. Opinion 512 is explicit: AI output must be reviewed by a lawyer before it is relied on.

If you’re buying an AI tool, you’re buying a supervision obligation. The review workflow has to be real, not a checkbox.

This is where most firms under-spec their deployments. They buy a drafting tool and forget to design the review step. Opinion 512 treats the review step as a substantive requirement, not a formality.
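One way to make the review step “real, not a checkbox” is to build the gate into the workflow itself, so unreviewed output simply cannot be released. A minimal sketch (the class and function names are hypothetical, not from any particular product):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDraft:
    """A piece of AI-generated work product awaiting attorney review."""
    matter_id: str
    content: str
    reviewed_by: Optional[str] = None  # attorney sign-off, per Rules 5.1/5.3

def approve(draft: AIDraft, attorney: str) -> AIDraft:
    """Record the attorney sign-off before the draft can be relied on."""
    draft.reviewed_by = attorney
    return draft

def release(draft: AIDraft) -> str:
    """Hard gate: refuse to release unreviewed AI output."""
    if draft.reviewed_by is None:
        raise PermissionError("AI output must be reviewed by a lawyer before use")
    return draft.content
```

The design choice is that `release` raises rather than warns: a workflow that can be skipped is exactly the checkbox Opinion 512 warns against.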

The 50-State Patchwork

The ABA’s opinions are persuasive but not binding. State bars set the actual rules. Since Opinion 512 was issued, many state bars have published their own GenAI opinions. Most reinforce 512 but add wrinkles.

California, New York, Florida, Illinois, and Texas all have active working groups or published guidance. North Carolina has written one of the clearer pieces on actually living with GenAI day-to-day. Some states require explicit disclosure of material AI use in the engagement letter; others are silent but likely to move in that direction.

Before you roll anything out, check your state bar’s current guidance. It changes faster than the rest of professional-responsibility case law typically does.

The Buyer’s Checklist

Before you sign with any AI vendor, demand the following nine answers in writing. Most vendors will hand-wave through one or two. Walk away from any vendor that hand-waves all nine.

  1. Does the tool train on my inputs? If yes, under what conditions can that be turned off, and is there a formal zero-retention mode?
  2. Where is my data stored, and for how long? Specific region, specific retention window.
  3. Who is the subprocessor chain? You’re responsible for their confidentiality practices under Rule 1.6 too.
  4. Is there a BAA or equivalent data-protection agreement? Even outside HIPAA contexts, the legal equivalent matters.
  5. What security certifications do you have? SOC 2 Type II is the current baseline; anything less is pre-production.
  6. Can you produce the data your tool has on my firm if I request it? Can you delete it on demand?
  7. What’s your hallucination rate on domain-specific legal queries, and how do you measure it? “We don’t hallucinate” is not an answer.
  8. Can the tool cite its sources? If it can’t, it isn’t ready for legal research.
  9. What happens to my data if you’re acquired or go under? Specifically, what does the data-export clause look like?

What This Means for Client Intake

The single highest-leverage AI deployment in a law firm today isn’t drafting or research. It’s intake. Clio’s Legal Trends Report found that firms fail to capture 64% of potential revenue from leads that never convert into clients. The gap isn’t quality of lawyering; it’s response time, after-hours coverage, and whether qualifying questions get asked thoroughly before the potential client moves on to the next firm in their Google results.

Conversational AI intake, done properly, captures three to five times more qualifying information per lead than a static web form. Under Opinion 512, it is deployable today if the vendor meets the checklist above.
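To see what the 64% figure means in dollars, here is a back-of-envelope estimate. The capture-gap percentage is from the Clio report cited above; the lead volume and average matter value are purely illustrative assumptions:

```python
# Back-of-envelope: revenue left on the table when leads never convert.
LEADS_PER_MONTH = 100       # hypothetical lead volume
AVG_MATTER_VALUE = 3_000    # hypothetical average fee per new matter
CAPTURE_GAP = 0.64          # share of potential revenue not captured (Clio)

def missed_revenue(leads: int, value: float, gap: float = CAPTURE_GAP) -> float:
    """Potential revenue lost per month under the assumed inputs."""
    return round(leads * value * gap, 2)

print(missed_revenue(LEADS_PER_MONTH, AVG_MATTER_VALUE))  # 192000.0
```

Even if your real numbers are a fraction of these assumptions, the leak is usually larger than the cost of fixing intake.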

The confidentiality bar is real. You’re collecting information from prospective clients who haven’t yet signed an engagement letter, and Rule 1.18 already covers prospective-client confidentiality. Configure the tool to stop short of legal advice, route urgency correctly, and be explicit with the caller about what the tool is and isn’t.

How We Build for Legal

When we deploy AI intake for a law firm at Benian, the stack is built around three non-negotiables.

1. The model never trains on firm inputs. Zero-retention mode or a dedicated endpoint, not the consumer tier of whatever vendor is cheapest that quarter.

2. Conversations are auditable. Every intake session produces a transcript and a structured summary for attorney review. The transcript is the record.

3. A human reviews before representation. The bot is explicitly not the lawyer. It’s the front-door conversation that gets the right person on a call with the right attorney, faster.
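The three non-negotiables above suggest a shape for the intake record itself: the transcript is the record, the summary exists for attorney review, and nothing proceeds to representation without a human sign-off. A minimal sketch (field names are hypothetical, not a real product schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntakeSession:
    """One conversational-intake session. Rule 1.18 covers the caller
    as a prospective client even before an engagement letter exists."""
    caller: str
    transcript: str             # full verbatim record, auditable
    summary: str                # structured summary for attorney review
    urgency: str                # routing hint, e.g. "routine" or "urgent"
    attorney_reviewed: bool = False  # flipped only by a human, never the bot
```

Making the record immutable (`frozen=True`) is a deliberate choice: the transcript is the record, so nothing downstream should be able to edit it.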

The stack under the hood: Vapi or Retell for voice, a dedicated model endpoint with no-training guarantees, routing logic in n8n, and a clean handoff to whichever attorney picks up the file. Cost typically lands in the $500–$1,500/month range for a 15-to-50-attorney firm, which is roughly the floor of what a full-time intake coordinator costs in most markets.

The Bottom Line

Opinion 512 isn’t a ban on AI in law practice. It’s a specification for how to use AI correctly. Firms that treat it as a compliance checklist (asking the four questions, demanding the nine answers, designing a real review workflow) deploy AI faster and safer than firms still waiting for “the rules to settle.”

If you want to talk through what this looks like for your firm specifically, book a 30-minute scoping call. We’ll walk through your current intake flow, identify where AI fits inside 512, and give you an honest scope and timeline.

Emre Benian

Founder and CEO, Benian

Emre built Benian from the ground up while studying Industrial Engineering at the University of Illinois at Urbana-Champaign. Self-taught in AI, automation, sales, and marketing, he made over 300 cold calls before landing his first client. He now builds AI systems for businesses across the US and Türkiye — focused on real ROI, not buzzwords.
