Legal & AI
Legal & AI encompasses two intersecting domains: AI tools transforming legal practice, and the legal frameworks being constructed to govern AI itself. Both are evolving rapidly, and each shapes the other in ways that will define how AI integrates into society.
AI in legal practice is already well past the experimental phase. Contract analysis, due diligence, and legal research—tasks that consumed thousands of billable hours—are increasingly handled by LLMs and specialized legal AI platforms. Harvey AI, CoCounsel (by Thomson Reuters), and similar tools can review contracts, identify risk clauses, summarize case law, and draft legal documents in minutes rather than days. The economics are stark: a task that previously required a team of junior associates billing at $400/hour can now be completed by an AI system for a fraction of the cost.
But the legal profession's encounter with AI has also produced cautionary tales. In 2023, a New York attorney in Mata v. Avianca submitted a brief containing case citations generated by ChatGPT; the cited cases simply did not exist. The resulting sanctions highlighted a fundamental tension: AI tools can dramatically accelerate legal work, but lawyers remain professionally and ethically responsible for the accuracy of everything they submit. Some courts have since begun requiring disclosure of AI usage in legal filings, and bar associations are developing guidelines for responsible AI use in practice.
AI-related litigation is itself becoming a major legal category. Copyright lawsuits over training data (The New York Times v. OpenAI, Getty Images v. Stability AI), disputes over AI-generated content ownership, and employment discrimination claims related to algorithmic hiring decisions are establishing precedent that will shape AI development for decades. The core legal questions—who owns AI-generated output, what constitutes fair use of training data, how to assign liability when an AI causes harm—remain largely unsettled.
The regulatory landscape is fragmenting along geographic lines. The EU's AI Act establishes a risk-based framework with strict requirements for high-risk applications, mandatory transparency, and significant penalties. The US has taken a more sector-specific, lighter-touch approach, though executive orders and agency guidance are layering on additional requirements. China has implemented some of the world's most specific AI regulations, including rules governing recommendation algorithms, deepfakes, and generative AI services. For companies building AI products, navigating this regulatory patchwork is itself becoming a significant operational challenge.
The intersection of AI governance and legal practice points toward a future where AI literacy becomes essential for legal professionals, and legal expertise becomes essential for AI developers. The companies and practitioners who understand both domains will have significant advantages in the emerging landscape.