The 7-area framework for diagnosing operational bottlenecks in companies with 10-100 employees. Specific questions, red flags, and what good looks like.
How to Run an Operations Audit (Step-by-Step Framework for 10-100 Person Companies)
You know something's off.
Revenue's growing, but margins aren't. You added three people last quarter and somehow everyone's still overwhelmed. Clients are getting what they need - mostly - but it takes more effort than it should. Your Monday morning feels like triage.
You can't point to one broken thing. It's more like a dozen things that are each 20% worse than they should be. And those compound.
That's what an operations audit is for. Not to confirm what you already know, but to surface the stuff you can't see because you're inside it every day.
What an Operations Audit Is (and Isn't)
An operations audit is a structured review of how your company actually runs - not how you think it runs, not how the org chart says it should run, but what actually happens on a Tuesday afternoon when a client request comes in.
It is:
- A diagnostic. Like bloodwork before your doctor prescribes anything.
- Time-boxed. A focused audit takes 1-2 weeks, not months.
- Evidence-based. You're looking at data, workflows, and patterns - not opinions.
It is not:
- A blame exercise. This isn't about finding who screwed up.
- A restructuring plan. That comes after.
- An excuse to buy new software. Tools are almost never the root cause.
The output is a map of where your operations are strong, where they're leaking, and which leaks to fix first.
If you want a printable self-assessment version of this, we have an operations audit checklist that pairs well with this framework.
The 7-Area Operations Audit Framework
We've run 50+ of these audits across service businesses, agencies, SaaS companies, and professional firms. Every company is different, but the operational failure points cluster into the same seven areas.
Here's how to evaluate each one.
Area 1: Team Structure & Capacity
The question isn't whether your people are good. It's whether they're doing the right work.
Diagnostic questions:
- Can you list what each person spends 80% of their time on this week? (Not their job description - their actual time.)
- How many people are doing work below their skill level more than 30% of the time?
- If your best performer quit tomorrow, what breaks?
- Do you have people who are "busy" but whose output is hard to measure?
What good looks like:
- Each role has 3-5 clear outcomes they own (not tasks, outcomes)
- No single person is a bottleneck for more than one critical process
- Senior people spend 70%+ of time on senior-level work
- Capacity is tracked weekly, not guessed at
- You can absorb one departure without a crisis
Red flags:
- Your highest-paid people spend significant time on data entry, scheduling, or copy-pasting between tools
- Everyone is "busy" but projects are still late
- You've hired for growth but the new people don't seem to be producing proportional output
- The founder is still the answer to "who handles that?" for more than two functions
If your team structure issues look agency-specific, our agency operations playbook goes deep on utilization and capacity planning.
Area 2: Workflow Efficiency
Every company has workflows. Most companies have never actually mapped them.
Diagnostic questions:
- Pick your three most common workflows (new client setup, invoicing, project delivery). How many steps does each take? How many of those steps are manual?
- Where do things get stuck? Is there a person or a step that's consistently a bottleneck?
- How many times does the same information get entered into different systems?
- What's your average cycle time for core processes? Is it getting better or worse?
What good looks like:
- Core workflows are documented (even if just as a simple checklist)
- Manual steps exist only where human judgment is genuinely needed
- Handoffs between people are clear: who passes what to whom, and when
- Cycle times are measured and trending downward
- Exceptions are handled by a process, not by interrupting someone senior
Red flags:
- Your onboarding process has more than 15 manual steps
- People regularly say "let me check on that" because there's no visibility into where things stand
- The same information lives in email, a spreadsheet, a project tool, and someone's head
- You've had the same bottleneck for six months and worked around it instead of fixing it
- Process knowledge lives in people, not in documented systems
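If cycle time has never been measured, a crude calculation from two timestamps per project is enough to start. A minimal sketch, assuming you can export start and delivery dates from your project tool (the dates below are made up):

```python
# Rough average cycle time from start/delivery dates per project.
# The data here is illustrative - export your own from whatever tool
# holds the real timestamps.
from datetime import date

projects = [
    {"started": date(2024, 1, 8),  "delivered": date(2024, 2, 2)},
    {"started": date(2024, 1, 15), "delivered": date(2024, 2, 20)},
    {"started": date(2024, 2, 1),  "delivered": date(2024, 2, 26)},
]

# Subtracting two dates yields a timedelta; .days gives whole days.
cycle_days = [(p["delivered"] - p["started"]).days for p in projects]
average = sum(cycle_days) / len(cycle_days)
print(f"average cycle time: {average:.1f} days")
```

Re-run it monthly on fresh exports and you have a trend line, which is the part that actually matters.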
Area 3: Tool Stack
The average company with 10-100 employees uses 12-25 SaaS tools. That's not necessarily a problem. The problem is when those tools don't talk to each other.
Diagnostic questions:
- List every tool your company pays for. (Most founders can't do this from memory - that's a data point.)
- How many of those tools have integrations with each other? How many of those integrations are actually set up?
- Are there tools you're paying for that fewer than 50% of intended users actually use?
- How much time per week does your team spend moving data between tools manually?
What good looks like:
- You have a master list of all tools, their cost, their owner, and their purpose
- Core tools (CRM, project management, finance) are connected
- Data flows automatically between systems for routine processes
- Tool adoption is above 80% for any tool that's been live for 3+ months
- You evaluate tools annually: keep, consolidate, or kill
Red flags:
- You have three tools that do overlapping things because different teams chose different solutions
- "We bought it last year but never fully rolled it out" applies to more than one tool
- Your team uses workarounds (exporting CSVs, manual copy-paste) because integrations don't exist or weren't configured
- Nobody owns the tool stack. Each department bought whatever they wanted.
- You're paying $2,000+/month in SaaS and can't articulate the ROI of half of it
Area 4: Documentation & SOPs
This is the area everyone knows is bad and nobody wants to deal with.
Diagnostic questions:
- If you hired someone into any role tomorrow, is there a written onboarding guide they could follow?
- Do standard operating procedures exist for your top 10 recurring processes?
- When was the last time any SOP was updated? (If the answer is "when it was created," that's a problem.)
- Can a team member find the information they need in under 2 minutes, or do they have to ask someone?
What good looks like:
- SOPs exist for all critical, repeatable processes
- They're stored in one searchable place (not scattered across Google Docs, Notion, and Slack pins)
- They're maintained: reviewed quarterly, updated when processes change
- New hires can get productive in weeks, not months
- Tribal knowledge is captured, not just tolerated
Red flags:
- Your documentation is a graveyard of outdated Google Docs that nobody trusts
- "Ask Sarah, she knows how that works" is the actual SOP for multiple processes
- New hires take 2-3 months to ramp because nothing is written down
- When a process changes, nobody updates the documentation (because nobody looks at it anyway)
- You've "started" a documentation project more than twice and it never stuck
The hard truth: documentation doesn't fail because people are lazy. It fails because there's no system for keeping it current. Build the update trigger into the process itself - when a workflow changes, the person making the change updates the SOP before the task is marked complete.
Area 5: Client Delivery
This is where operational problems become revenue problems.
Diagnostic questions:
- Is your client onboarding experience consistent, or does it depend on who's managing the account?
- What's your average time from signed contract to first deliverable? Is it getting longer?
- Do clients experience the same quality regardless of which team member is doing the work?
- How do you currently measure client satisfaction? (Not "we think they're happy" - actual measurement.)
What good looks like:
- Onboarding follows a standard playbook: every client gets the same great experience
- Time to first value is measured and optimized
- Quality standards are documented and enforced through checklists or reviews
- Client feedback is collected systematically (not just when someone complains)
- Delivery timelines are hit 85%+ of the time
Red flags:
- Client experience varies wildly depending on who runs the account
- You've lost clients and the real reason was operational (missed deadlines, dropped balls, slow responses) - not strategic
- Your best account manager is also your most overworked person
- You don't know your average delivery timeline because you've never measured it
- Clients regularly ask "what's the status?" because you're not proactively communicating
If onboarding is your biggest leak, read our piece on client onboarding automation. Getting from contract to kickoff in 48 hours changes the entire client relationship.
Area 6: Financial Visibility
You'd be surprised how many companies with $2M-$10M in revenue can't tell you their margins by service line.
Diagnostic questions:
- Can you see your profit margin per service line, per client, or per project - in real time?
- Do you know which clients are actually profitable and which are subsidized by the profitable ones?
- How quickly do you know when a project is going over budget? Before it's delivered, or after?
- Can you forecast your cash position 90 days out with reasonable accuracy?
What good looks like:
- Margins are tracked per service line and per client (not just company-wide)
- You have a financial dashboard you actually look at weekly
- Project profitability is visible during the project, not just after
- Revenue forecasting exists and is within 10-15% accuracy
- You can make pricing decisions based on actual cost data, not gut feel
Red flags:
- Your P&L is the only financial report you look at, and it's monthly
- You price based on what competitors charge, not on what it actually costs you to deliver
- You've been surprised by a cash crunch more than once in the last year
- Your best clients might actually be your least profitable, and you'd have no way to know
- Financial data lives in your accountant's system and you see it weeks after the fact
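To see why a company-wide margin number hides the real story, run the math per service line. A minimal illustration with made-up figures - the point is the spread between lines, not the specific values:

```python
# Margin by service line. Revenue and delivery cost figures are
# illustrative - substitute your own exported numbers.
lines = {
    "Design":   {"revenue": 40000, "cost": 26000},
    "Dev":      {"revenue": 90000, "cost": 72000},
    "Strategy": {"revenue": 30000, "cost": 12000},
}

for name, d in lines.items():
    margin = (d["revenue"] - d["cost"]) / d["revenue"]
    print(f"{name}: {margin:.0%}")
```

In this example the blended margin looks healthy, but one line is running at triple the margin of another - exactly the kind of pricing signal a single P&L number never shows you.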
Area 7: Automation Readiness
Not everything should be automated. But a lot more can be automated than most companies realize.
Diagnostic questions:
- What are the top 5 tasks your team does repeatedly that follow the same steps every time?
- How much time per week does your team spend on data entry, status updates, or moving information between systems?
- Which processes have clear rules (if X, then Y) versus which require genuine human judgment?
- Have you tried automating anything before? What happened?
What good looks like:
- Routine notifications, reminders, and status updates are automated
- Data entry that follows rules is handled by automations, not people
- Client-facing communications (confirmations, reminders, follow-ups) are triggered automatically
- Your team spends their time on judgment, strategy, and relationships - not copying data
- You have a backlog of automation opportunities, prioritized by impact
Red flags:
- A team member spends 5+ hours per week on tasks that follow the exact same steps every time
- You've said "we should automate that" about the same process for six months
- Your "automation" is someone creating a spreadsheet macro or a mail merge
- You bought an automation tool (Zapier, Make, etc.) but only set up 1-2 basic zaps
- Automating feels risky because nobody documented the process well enough to hand it to a machine
What to Do With Your Findings
You've now got a list of problems. Probably a long one. The worst thing you can do is try to fix everything at once.
The Prioritization Matrix
Score each finding on two axes:
                     HIGH IMPACT
                          |
        Quick Wins        |    Strategic Projects
        (Do first)        |    (Plan & schedule)
                          |
 LOW EFFORT --------------+-------------- HIGH EFFORT
                          |
        Low Priority      |    Time Sinks
        (Backlog)         |    (Avoid for now)
                          |
                     LOW IMPACT
How to score:
- Impact: How much does fixing this improve revenue, margins, client experience, or team capacity? (1-5)
- Effort: How much time, money, and disruption does fixing this require? (1-5, where 1 = low effort)
Tackle them in this order:
Priority 1: High impact, low effort (quick wins)
→ These build momentum and free up capacity for bigger changes
→ Timeline: This week / next week
Priority 2: High impact, high effort (strategic projects)
→ These are your 30/60/90 day initiatives
→ Assign an owner, set milestones, track progress
→ Timeline: 30-90 days
Priority 3: Low impact, low effort (backlog)
→ Worth doing eventually, but don't let these distract from Priority 1 and 2
→ Timeline: As capacity allows
Priority 4: High effort, low impact (time sinks)
→ Skip these. Seriously. They feel productive but they aren't.
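For a long findings list, the scoring and quadrant logic above can be sketched as a short script. The example findings and the cutoff (a score of 3+ counts as "high") are illustrative assumptions, not part of the framework:

```python
# Classify audit findings into the four quadrants of the impact/effort
# matrix. Scores run 1-5; treating 3+ as "high" is an illustrative cutoff.

def classify(impact: int, effort: int) -> str:
    """Map 1-5 impact/effort scores to a priority quadrant."""
    high_impact = impact >= 3
    high_effort = effort >= 3
    if high_impact and not high_effort:
        return "Priority 1: quick win"
    if high_impact and high_effort:
        return "Priority 2: strategic project"
    if not high_impact and not high_effort:
        return "Priority 3: backlog"
    return "Priority 4: time sink"

# Hypothetical findings: (name, impact, effort)
findings = [
    ("Consolidate redundant tools", 4, 2),
    ("Rebuild client onboarding workflow", 5, 4),
    ("Rename shared drive folders", 2, 1),
    ("Migrate all data to a new ERP", 2, 5),
]

# Review highest-impact findings first; lower effort breaks ties.
for name, impact, effort in sorted(findings, key=lambda f: (-f[1], f[2])):
    print(f"{name}: {classify(impact, effort)}")
```

Even on paper, the exercise is the same: two scores per finding, then read the quadrant.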
Common Quick Wins We See
After running these audits consistently, certain quick wins come up in almost every company:
- Consolidate your tools. Kill the redundant ones. Save $500-2,000/month immediately.
- Document your top 3 processes. Not all of them. Just the three that cause the most confusion. Takes a day, saves months.
- Set up one integration. Connect your CRM to your project tool, or your invoicing to your time tracking. One connection that eliminates daily manual work.
- Add one automation. New client signed? Auto-create the project, send the welcome email, notify the team. One workflow that removes 30 minutes of manual setup.
- Start tracking one metric you don't currently track. Utilization rate. Project margins. Average delivery time. You can't improve what you don't measure.
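The "new client signed" automation in that last list is usually wired up in a no-code tool like Zapier or Make rather than custom code, but the chain itself is simple. A hypothetical sketch - every function name here is a placeholder for a real integration step:

```python
# Sketch of the "new client signed" kickoff chain. All helpers below are
# hypothetical placeholders for real integrations (project tool, email
# platform, team chat) - the structure is the point, not the names.

def create_project(client: dict) -> str:
    return f"project created: {client['name']}"

def send_welcome_email(client: dict) -> str:
    return f"welcome email sent to {client['email']}"

def notify_team(client: dict) -> str:
    return f"team notified about {client['name']}"

def on_client_signed(client: dict) -> list[str]:
    """Run every standard kickoff step the moment a contract is signed."""
    return [
        create_project(client),
        send_welcome_email(client),
        notify_team(client),
    ]

for action in on_client_signed({"name": "Acme Co", "email": "ops@acme.test"}):
    print(action)
```

The value isn't the code - it's that the kickoff steps are defined once and fire every time, instead of living in someone's head.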
The 30-60-90 Day Plan
Days 1-30: Quick wins
- Fix the obvious stuff
- Document what you find
- Build small momentum
Days 31-60: Structural fixes
- Redesign the broken workflows
- Implement the tool changes
- Start measuring what matters
Days 61-90: Optimization
- Refine based on data
- Automate the now-documented processes
- Build the habits that sustain the improvements
One Last Thing
You can run this audit yourself. Many founders do. Block a few hours for a first pass through the seven areas, be honest about what you find, and start with the quick wins.
The framework works. The questions are the right ones.
But there's a difference between running your first audit and running your fiftieth. Pattern recognition matters. The founder who's inside the business every day often can't see the structural issues because they've adapted to them. That's not a weakness - it's human.
If you want someone who's done 50+ of these and can spot the patterns in a week instead of a month - that's what we do.