We're back. The new season of Fringe Legal, Bots at Work, is out now.

Jennifer Waite, Chief Knowledge & Innovation Officer at Arnall Golden Gregory, shares her journey from law librarian to the C-suite, and how her early experiences in law and librarianship shaped her approach to knowledge management, technology, and AI in a law firm setting.

In this episode, we discuss:

  1. Jennifer's origin story from law school to librarian to CKIO
  2. Why governance sits alongside innovation at AGG, not under IT or security
  3. What a real vendor partnership looks like, and the vibe check that determines it
  4. The double standard we hold AI to vs. human work
  5. The biggest mistake teams make in AI adoption

My biggest takeaways:

There's no AI without governance

One of the structural decisions that makes AGG's program different is that information governance sits alongside the innovation team. Most firms put it under IT or security. Jennifer explained why that placement matters.

"There's no AI without governance. Otherwise you're just in the wild west."

The practical benefit is that innovation projects arrive at security reviews with most governance questions already answered. The projects are more secure because they were governed from the start, not secured as an afterthought.

In most organizations, security and innovation pull in opposite directions. That's a structural problem, not a people problem. When governance lives on the security side, its default is caution. When it sits alongside the people building, its default is clarity.


The vendor vibe check

Jennifer was candid about what she looks for when evaluating vendor relationships. Responsiveness matters. Honesty about roadmaps matters more. And when something feels off, it usually is.

"If we're not getting answers back to questions, if there's pushback, if the energy, honestly if the vibes are off — you can tell very early."

What erodes trust is the gap between what is promised and what is delivered. She mentioned vendors who commit to capabilities their software lacks. "It feels a little misleading once we find out." Her expectation is not perfection. It is transparency.

She also raised a point worth sitting with: "I literally have some software we've paid for where I'm like, I could have an intern code this in a weekend." Vendors that are not moving at the pace of the market will start losing deals to internal builds.


The wrong job for AI

The use case that keeps failing is not failing because AI is bad. It's failing because it was the wrong job for AI.

"Everyone will come with, we need automation. And they just need some good old-fashioned Power Automate. Or their data is bad coming in. Here's a PDF, turn it into an Excel, don't miss anything. That's not a good thing to ask AI to do, because it will miss stuff. That's not what its job is."

The pattern is consistent: people blame the model when the issue is the task design. Jennifer adjusts how she handles it based on the user. For those who just want the output and don't care how it works, she doesn't force a lesson. For the curious ones who want to understand, she explains the logic — because those users will keep improving.


The 80% problem

There's a double standard applied to AI, and Jennifer named it better than I've heard it put before.

"So many briefs filed with judges, so many judicial opinions themselves have errors. And we know that. And we accept it. But I think it's because as a person, we feel more sympathetic towards people for making errors and less sympathetic towards artificial intelligence. Machine, it should be better than us. It should be perfect."

Her pushback on that: "Did this get you to 80% of your draft? Sounds great. Why are we waiting?"

The firms moving fastest on AI are not asking whether the output is perfect. They are asking whether it is good enough to build from.


The biggest mistake, and the most important skill

On the biggest mistake she sees teams making:

"It's burying the head in the sand. I think any other way you slice it, no matter how you tackle the AI problem, you're probably going to get some benefit out of it. It's the complete avoidance of the problem entirely. Or being like, I'm going to wait until the tools are so good, then I'll adopt. If you waited that long, you don't have the necessary framework or skills. And you've lost all that time where everyone else is now at that level."

The waiting strategy looks conservative. It's the highest-risk position.

On the skills that will matter most going forward, she didn't reach for anything technical.

"It sounds trite and it's been said a lot, but I do agree. It's the human skills. The interpersonal skills, the emotional intelligence. And that curiosity — that desire to learn something new and not be afraid of new things."

And if you're starting from scratch with no resources? Start with one pain point, solve it in front of people, and let the demand build from there.

"They'll email the AI team with a problem. We'll help them with a solution. And then they'll be like, well, guess what, I have five more problems you could probably help with."

Start small. Eat the elephant one bite at a time.


Resources & Links:

Connect with Jennifer Waite: