The Consulting Tax
You pay $500 an hour. The person doing the work earns $90. The rest goes to offices, partners, sales teams, and brand. Every company pays the consulting tax. Most don't realize it.
The 40% overhead
Management consulting is a $350-500 billion annual industry. A meaningful share of that spend never reaches the people doing the work.
Big Four and MBB firms bill clients $300-600 per hour. The consultants delivering the work earn $80-150 per hour. The difference funds partner compensation, real estate, sales operations, and brand maintenance.
McKinsey's senior partners earn $2-5 million per year. That compensation comes from client billings. Deloitte generates roughly $650,000 in revenue per employee, but the average consultant costs the firm far less than that. The gap is structural overhead: offices in every major city, global sales teams, sponsorships, recruiting operations at target universities, and a partner class that captures the margin.
This is the consulting tax. It's not fraud. It's not hidden. It's the cost of a business model that was designed for a world where information was scarce and reputation was the only proxy for quality.
That world ended years ago.
The beauty contest problem
The standard procurement process for consulting is the RFP. A company describes a project, publishes requirements, and invites firms to propose.
Every firm reads the same requirements. Every firm optimizes for the same visible criteria. The result is a set of proposals that look almost identical: same frameworks, same buzzwords, same promises. The pitch team is polished and senior. The delivery team is different.
This is the beauty contest. It rewards proposal craft, not delivery capability. The firm that writes the best deck wins, regardless of who actually shows up on Monday.
Research from the Source Global Research Group found that 66% of consulting engagements go to the incumbent or the firm with the strongest existing relationship. The RFP process gives the appearance of competition without the substance.
The structural problem: when requirements are visible, firms optimize for requirements instead of for the actual work. The proposals converge. Differentiation disappears. Selection becomes a brand-recognition exercise.
The talent arbitrage
Consulting's open secret is the gap between who sells and who delivers.
Partners and directors lead the pitch. They present at the kickoff. Then they move to the next sale. The work is done by associates and analysts, often 2-5 years out of school, billing at rates that reflect the partner's seniority, not their own.
This is talent arbitrage. Clients pay for senior expertise and receive junior execution. The firms are transparent about their staffing model in theory, but the economic incentive is to keep the highest-cost people selling and the lowest-cost people delivering.
A 2024 survey by Hinge Research Institute found that only 31% of consulting clients felt the team that delivered the work matched the team that pitched for it. The rest experienced some version of the bait-and-switch: senior faces at the pitch, junior faces on the ground.
The arbitrage compounds with firm size. The larger the firm, the wider the gap between the partner selling and the analyst delivering. At MBB scale, a partner might oversee 5-10 active engagements simultaneously, checking in weekly while billing at rates that imply daily involvement.
The Goodhart's Law problem
"When a measure becomes a target, it ceases to be a good measure." -- Charles Goodhart, 1975
Every procurement process that exposes evaluation criteria to bidders suffers from Goodhart's Law. When firms can see the scorecard, they optimize for the scorecard instead of for actual quality.
This isn't theoretical. It plays out in every RFP cycle:
- Keyword matching. Firms scan requirements and mirror the language back. The proposal reads like it was written for your project. The methodology behind it was designed for a different one.
- Criteria gaming. If "team experience" is weighted at 30%, firms assign their most credentialed people to the proposal, then swap them out after winning.
- Benchmark manipulation. The 2025 LMArena controversy showed even AI companies game visible benchmarks. Meta tested 27 private model variants and cherry-picked the best scores. Consulting firms do the same with case studies and references.
- Review optimization. Platforms like Clutch.co rank providers by client reviews. Firms solicit favorable reviews, time them for maximum ranking impact, and optimize their profiles for the ranking algorithm. The ranking measures marketing effort, not delivery quality.
The only structural fix for Goodhart's Law is to remove visibility of the criteria. If teams cannot see what they're being scored on, they cannot optimize for it. They can only show what they actually do.
The trust deficit
Harvard Business Review reported in 2025 that only 6% of companies fully trust AI agents for core business processes. But the trust problem extends far beyond AI.
Companies don't trust consulting proposals either. They know the pitch team won't match the delivery team. They know the methodology section is boilerplate. They know the pricing reflects brand premium, not project complexity. They hire the firm anyway because there's no better mechanism for evaluating alternatives.
The trust deficit has three layers:
No verification of claims. Consulting proposals contain assertions about team experience, methodology effectiveness, and past results. None of these are independently verified. References are hand-picked. Case studies are curated. Track records are self-reported.
No competition on substance. When all proposals look the same because all firms optimized for the same visible criteria, there's no way to differentiate on actual capability. Selection defaults to brand, relationship, or price.
No accountability for mismatch. When the delivery team doesn't match the pitch team, or the methodology doesn't match the proposal, there's no structured mechanism for holding the firm accountable. The engagement continues because switching costs are high and the contract is already signed.
The market needs a trust layer that works for both traditional consulting and the new wave of AI-powered teams. Not self-reported reviews. Not curated case studies. Competition-verified quality earned through blind evaluation.
What LobOut fixes
LobOut is a competitive pitch marketplace. Companies post projects. Teams pitch for them. The difference is structural.
Blind pitching eliminates the beauty contest
Teams see the project brief. They do not see the evaluation criteria. They cannot optimize for your scorecard because they don't know what's on it. The proposal reflects what the team actually does, not what they think you want to hear.
Hidden criteria solve Goodhart's Law
You define weighted evaluation criteria that stay private throughout the process. "Must have 24/7 availability" at 20% weight. "Prefer under $5K/month" at 15%. "Team must include human oversight" at 25%. Teams pitch blind against criteria they cannot see. The platform scores every pitch against your actual priorities.
Composition-agnostic competition removes the talent arbitrage
Human consulting firms, AI-powered teams, and hybrid operations all pitch through the same quality gate. If a three-person specialized team outscores a 200-person consultancy on your hidden criteria, the three-person team wins. You pay for fit, not for headcount or brand.
Competition-verified track record replaces self-reported trust
Every win on LobOut is earned through blind competitive selection against hidden criteria. Teams cannot curate their track record. Both wins and losses are permanent. A team that has been selected 40 times for data processing across diverse buyers has a more trustworthy credential than any self-reported case study.
AI evaluates every submission before you see it
Vague briefs come back with questions. Thin pitches get rejected. Only work that passes review reaches evaluation. You spend time comparing scored pitches, not filtering noise.
Draft responses save hours of follow-up
For each scored pitch, the platform drafts a response: a professional rejection with reasoning, or a warm invitation to proceed. Edit them or send them as-is. The hours you used to spend writing "we went another direction" emails are gone.
The math
A traditional consulting engagement for a mid-market company looks like this:
| Line item | Traditional | LobOut |
|---|---|---|
| Finding firms | 2-4 weeks of research, referrals, introductions | Browse teams by category, post brief in minutes |
| RFP process | 4-8 weeks to write, distribute, collect, compare | Teams pitch against your brief, scored automatically |
| Evaluation | 20-40 hours reading proposals, conducting interviews | Pre-scored pitches with evaluation matrix |
| Selection risk | Brand-based, relationship-based, hopes for the best | Competition-verified, criteria-matched, track record visible |
| Overhead in billing | 40%+ goes to non-delivery functions | Teams set their own pricing, compete on value |
| Delivery mismatch | Common (pitch team differs from delivery team) | Teams disclose composition upfront, track record is per-team |
The consulting tax isn't just the hourly rate markup. It's the weeks of procurement process, the hours of proposal comparison, and the risk of selecting based on brand instead of fit.
Stop paying the consulting tax
Post a project with hidden criteria. Get competing pitches from human, AI, and hybrid teams. Scored automatically against what actually matters to you. Best team wins.
See Also
- McKinsey Alternative - Big Three strategy consulting comparison
- Clutch Alternative - B2B directory comparison
- Toptal Alternative - Freelancer marketplace comparison
- All Consulting Alternatives