Are lawyers naturally primed to work and build with AI? Our guest this week certainly thinks so, and he proved it by beating out 13,000 people, including hundreds of engineers from top companies, to win Anthropic's hackathon.
Intrigued? Let's get into it.
In this episode, I speak with Mike Brown, a personal injury lawyer and founder of Crossbeam, an AI-powered permit review application for California cities.
Before law school, Mike was a VFX artist and reality TV producer. He started building AI tools out of necessity when his first legal tech startup ran out of money and he couldn't afford a developer.
That path led him to Cursor, Claude Code, and eventually a hackathon he entered on a whim. He won $50,000 in Anthropic credits against 13,000 applicants. He still cannot read a single line of code. This conversation is full of practical approaches to building with AI, including how Mike actually built Crossbeam, the planning techniques he swears by, and why he genuinely believes lawyers are better positioned for this moment than most people in the room.
Listen here:
Or, watch here:
In this episode, we discuss:
- Why lawyers are better positioned for AI than most engineers realize, and what law school actually trained you for without telling you
- How Mike beat 13,000 applicants by solving a problem engineers assumed couldn't be solved yet
- The adversarial prompting technique that catches plan failures before they cost you weeks
- Why building at the frontier matters more than building for the models you have today
- What to keep for yourself when you've automated everything else
My biggest takeaways:
Reasoning beats engineering when it comes to designing systems.
Mike entered a hackathon against 13,000 applicants, including engineers from NVIDIA and Apple. He won. He still cannot read a line of code.
I've been saying for a while that lawyers are underestimating themselves in this moment. Mike's story is probably the clearest illustration I've come across. What got him through wasn't technical knowledge. It was the ability to read API documentation the same way he used to find relevant case law, break a complex problem into its parts, and figure out what question to ask next when something broke.
"API documentation is more interesting than what we read for three years at law school. Lawyers already know how to go into realms they don't understand and find the relevant information really well. It's not math. It's reasoning."
I hear from a lot of lawyers who feel like they're on the outside of this. Mike is a useful counter to that. The skills you've spent years building are exactly the ones that matter here. The main thing is the willingness to start.
Don't build for the models you have. Build for the ones coming.
Mike started building Crossbeam when Claude kept encountering errors with blueprint images. Context limits were tight, vision was unreliable, and he was essentially trying to do something the model wasn't quite ready for.
Rather than stopping, he found creative workarounds, breaking large blueprints into pieces the model could actually process.
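For a sense of what that kind of chunking can look like, here's a minimal sketch in Python. The episode doesn't cover the specifics of Mike's pipeline, so the PIL-based approach, the function name, and the tile and overlap sizes are illustrative assumptions of mine, not his actual code.

```python
from PIL import Image

def tile_blueprint(path: str, tile_px: int = 1024, overlap_px: int = 64):
    """Yield (box, tile) pairs covering the full sheet, with overlap so
    details that straddle a seam aren't lost."""
    img = Image.open(path)
    step = tile_px - overlap_px  # positive as long as overlap < tile size
    for top in range(0, img.height, step):
        for left in range(0, img.width, step):
            box = (left, top,
                   min(left + tile_px, img.width),
                   min(top + tile_px, img.height))
            yield box, img.crop(box)
```

Each tile can then go to the model as its own image, with its coordinates included in the prompt so findings can be stitched back onto the original sheet.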
Then Opus 4.6 dropped during the hackathon. The thing that kept failing suddenly worked.
"Wednesday night I threw the initial permit thing through locally. And then it was like, wait, this didn't error out."
On the morning we recorded, he picked up a 7% performance boost just by switching to a newer model. Nothing else changed.
Boris Cherny, creator of Claude Code, makes this point explicitly: don't build for the models you have today, build for the ones coming in six months.
What I find interesting about Mike's story is that he didn't set out with that philosophy. He just kept pushing on a problem that felt just out of reach, and the models eventually got there. That gap between what's possible today and what will be possible soon is exactly where I'd want to be spending my time.
Planning is the work.
Mike has a technique he calls adversarial prompting. He builds a plan in one model, then takes it to a second model and asks it to describe what it thinks the plan will actually produce. The gaps between what he intended and what comes back tell him exactly where the plan needs more thought.
"You take that plan, you bring it over to another model. It will basically describe your holes. You take that info back to the original planner. You just keep working on it, loop back and forth between the two agents."
Eight weeks into building Crossbeam, he's clear on what the shortcuts cost him. The corners he cut early are still showing up. His framing is simple: deep planning upfront is almost always cheaper than fixing things downstream. I've found this to be true in my own building, too. The urge to just start is strong, but the time you put into a plan is rarely wasted.
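To make the loop concrete, here's a minimal sketch using the Anthropic Python SDK. The model IDs, prompt wording, and three-round cap are placeholders of mine, not Mike's actual setup; the structure (plan, critique, revise, repeat) is the part that matters.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def ask(model: str, prompt: str) -> str:
    response = client.messages.create(
        model=model,
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

goal = "an AI-assisted permit review workflow"  # hypothetical project
plan = ask("claude-opus-4-5", f"Draft a detailed build plan for {goal}.")

for _ in range(3):  # a few rounds usually surfaces the biggest holes
    # The second model plays the adversary: describe what the plan would
    # actually produce, and flag gaps and ambiguities.
    critique = ask(
        "claude-sonnet-4-5",
        "Describe concretely what this plan will produce if followed "
        f"as written. Flag every gap or ambiguity:\n\n{plan}",
    )
    # The original planner revises against the critique.
    plan = ask(
        "claude-opus-4-5",
        f"Revise the plan to close these gaps.\n\nPLAN:\n{plan}\n\n"
        f"CRITIQUE:\n{critique}",
    )
```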
Invest one week to learn.
When I ask people why they haven't started with AI yet, the answer is almost always time. Mike's response to that is worth taking seriously. Pick one task you do regularly that takes ten minutes. For one week, force the AI to do it, even if it takes fifteen. Not everything at once. Just that one thing, on repeat, until you start to feel it click.
"Within a week, you will now be seeing the compounding of the skills. And before you know it, you're gonna start building Skills.md."
What I'd add to that is: start with something low stakes. Mike started with fantasy football. The skills you pick up there (understanding context, learning when to course-correct, figuring out how to give the model what it needs) transfer directly to the harder problems. You just don't want to learn the basics on something that actually matters yet.
What to keep for yourself.
For all the automation Mike has built, there's one thing he won't hand off. When someone calls after a car accident or gets hurt at work, he picks up.
"I will almost always drop everything to take that call. No matter how good the voice agents get, I'm still going to answer that phone call. Because that's the one part I like about being a lawyer."
What struck me is that he built the automation specifically to protect that. All the cron jobs, the morning briefings, the AI handling admin: it all exists to clear the way for the thing he actually cares about.
I think that's a useful frame for anyone thinking about where AI fits into their work. Not what can AI take over, but what do you want to make more room for?
Resources & Links:
Tools we discussed
- Cursor — AI-powered IDE. Good starting point if you're new to building with code agents.
- Wispr Flow — Voice dictation tool that works across your desktop. Both Mike and I use this daily.
Frameworks and concepts
- RIPER-5 — Mike's recommended framework for context management with AI agents. Research, Innovate, Plan, Execute, Review.
- Adversarial prompting — Mike's technique for stress-testing a plan. Build it in one model, take it to a second model and ask what it thinks the plan will produce. The gaps tell you where to improve.
- IRAC — Issue, Rule, Analysis, Conclusion. The law school reasoning framework that maps directly to effective AI prompting.
- Skills.md — A markdown file that gives your AI agent persistent context about how you work, your preferences, and your style. Worth building early.
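To show the shape, here's a rough starter Skills.md for someone in Mike's position; every heading and line is hypothetical, just to illustrate what "persistent context" can mean in practice.

```markdown
# Skills.md

## About me
Personal injury lawyer; building Crossbeam, a permit review app.
Non-coder: explain technical choices in plain English before acting.

## Preferences
- Plan first. Show me the plan and wait for approval before writing code.
- Flag anything you're uncertain about instead of guessing.

## Style
- Short paragraphs, no jargon without a one-line definition.
```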
People and projects mentioned
- Mike Brown on LinkedIn
- Boris Cherny — Creator of Claude Code. His framing: build for the models six months from now, not the ones you have today.
Events
- Code With Claude — Anthropic's developer conference. Mike spoke on the main stage the day this episode dropped.