The Three-Axis Competency Model
Comprehensive framework combining technical depth, business context, and agency excellence to evaluate consulting capability holistically.
Executive summary
- Competency in consulting is multi-dimensional — technical skill alone doesn't predict delivery success
- The three-axis model evaluates Technical (0-4) × Business (0-4) × Agency (1-5) to produce a holistic competency score
- High technical depth with low business context creates client frustration; high agency with low technical depth creates delivery risk
- Use this model to assess internal talent, external hires, and partners on a comparable scale
- Typical successful consultant profiles: (Technical 3, Business 2, Agency 4) or (Technical 2, Business 3, Agency 4)
Definitions
Competency Model: A structured framework for evaluating capability across multiple dimensions, producing a holistic assessment that predicts delivery success better than any single dimension alone.
The Three Axes:
- Technical (0-4): Mastery of domain-specific skills and knowledge
- Business (0-4): Understanding of how work connects to business outcomes
- Agency (1-5): Degree of ownership over problems and solutions
What this includes: Observable, assessable dimensions that predict delivery performance, client satisfaction, and project profitability.
What this does NOT include: Personality traits, cultural fit, leadership style, or non-work factors.
Key distinction: This model measures delivery capability, not potential, likability, or seniority. Someone can be senior by title but score low on all three axes.
Why this matters
Business impact
The three-axis model solves critical business problems:
Problem 1: Hiring mismatches
- Symptom: Strong technical interview, poor delivery performance
- Root cause: Hired for Technical only, ignored Business and Agency
- Cost: Rework, client escalations, potential contract loss
- Fix: Assess all three axes before hiring
Problem 2: Delivery risk from misalignment
- Symptom: Project delays, scope creep, client dissatisfaction
- Root cause: Technical depth doesn't match work complexity, or low agency creates bottlenecks
- Cost: Margin erosion, team burnout, reputation damage
- Fix: Match competency profile to work requirements
Problem 3: Wasted budget on "wrong" seniority
- Symptom: Expensive senior engineer produces junior-level output
- Root cause: Seniority ≠ competency; inflated titles without capability
- Cost: Paying senior rates for junior delivery
- Fix: Assess actual competency, not resume claims
Value of multi-axis assessment
Organizations using this model report:
- Fewer hiring mistakes — better 6-month retention and performance outcomes when assessing all three axes upfront
- Improved project margins — less rework and better staffing decisions when competency profiles match work requirements
- Higher client satisfaction — fewer escalations and better client communication when business context and agency are assessed alongside technical skills
- Stronger succession planning — systematic competency assessment identifies ready-now successors for critical roles
Typical role profile distribution — CaseCo Mid illustrative data (scores normalised 0–1)
| Axis | Junior Cloud | Cloud Arch | Data Sci | Delivery Mgr | AI Eng |
|---|---|---|---|---|---|
| Technical | 50 % | 88 % | 75 % | 50 % | 88 % |
| Business | 25 % | 63 % | 50 % | 88 % | 63 % |
| Agency | 25 % | 75 % | 63 % | 75 % | 75 % |
The Model: Three Axes
How it works
The scoring mechanism
Step 1: Assess each axis independently
- Technical: Use domain-specific exercises and work samples
- Business: Use stakeholder interaction examples and trade-off discussions
- Agency: Use behavioral interviews and reference checks
Step 2: Normalize Agency to 0-1 scale
AgencyNorm = (Agency - 1) / 4
Examples:
- Agency 1 → 0.00
- Agency 3 → 0.50
- Agency 5 → 1.00
Step 3: Apply weights and calculate score
Competency Score = (Technical × 0.5) + (Business × 0.2) + (AgencyNorm × 0.3)
Maximum possible: 3.1 (Technical 4 × 0.5 + Business 4 × 0.2 + AgencyNorm 1.0 × 0.3)
Minimum possible: 0.0
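The three steps above can be sketched in a few lines of Python. This is a minimal illustration of the scoring mechanism; the function name and range checks are ours, not part of the framework.

```python
def competency_score(technical, business, agency,
                     w_technical=0.5, w_business=0.2, w_agency=0.3):
    """Weighted competency score per the three-axis model.

    technical, business: raw 0-4 scores (used unnormalised, so the
    maximum possible score is 4*0.5 + 4*0.2 + 1.0*0.3 = 3.1).
    agency: raw 1-5 score, normalised to 0-1 before weighting.
    """
    if not (0 <= technical <= 4 and 0 <= business <= 4 and 1 <= agency <= 5):
        raise ValueError("axis score out of range")
    agency_norm = (agency - 1) / 4  # Step 2: Agency 1-5 -> 0-1
    # Step 3: apply weights
    return technical * w_technical + business * w_business + agency_norm * w_agency

# A typical successful consultant profile (Technical 3, Business 2, Agency 4):
competency_score(3, 2, 4)  # -> 2.125
```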
Why these weights?
Technical (50%): Foundation of delivery capability
- You can't solve problems you lack the skills to execute
- Highest single predictor of "can they do the work?"
- Justifies higher weight
Agency (30%): Multiplier of effectiveness
- High agency makes teams more efficient (less management overhead)
- Low agency creates bottlenecks regardless of technical skill
- Critical for consulting where autonomy is expected
Business (20%): Differentiation factor
- Separates consultants from contractors
- Critical for client satisfaction but not all roles need high levels
- Can be developed faster than technical depth
Customization: Adjust weights based on role. Internal engineers may need Technical 60%, Business 10%, Agency 30%. Client-facing consultants may need Technical 40%, Business 30%, Agency 30%.
Scores are directional, not verdicts
The score gives you a starting point — not a hiring decision.
A 2.1 and a 2.2 are not meaningfully different. What matters is where the score sits relative to the role's requirements, and which axes are driving it. A 2.1 built on strong agency is a different hire than a 2.1 built on technical depth with low agency.
Pair every numeric score with a qualitative read:
- What does this person do when the work gets ambiguous?
- Which axis is the limiting factor for this specific role?
- Is that limitation one that develops quickly, or is it foundational?
This applies throughout the framework. On the financial and operational side, numbers can be relatively definitive — cost is cost. On the competency and talent side, numbers clarify direction; judgment closes the decision. Use the score to structure the conversation, not replace it.
Example: CaseCo Mid
Competency assessments were producing inconsistent results — technical leads rated the same engineer differently depending on the team and project context, creating hiring disputes and staffing arguments about who was qualified for what work.
Decision
Adopt the three-axis model with weighted scoring (Technical 50%, Business 20%, Agency 30%) as the single source of truth for all competency decisions: hiring, staffing, and development.
1. Ran calibration sessions with all team leads to align on what each axis level looks like in practice at CaseCo — using real past examples, not abstract definitions.
2. Assessed 40 engineers across all three axes using work samples (Technical), stakeholder interaction examples (Business), and reference checks (Agency).
3. Published role profiles for each capability: minimum viable scores per axis, not just a single weighted threshold.
4. Required all staffing requests to specify required axis levels rather than seniority or years of experience.
Outcome
Assessment variance dropped significantly across team leads. Staffing disputes reduced by 80% within two quarters. Engineers had clear, evidence-based development paths instead of vague 'get more senior' feedback.
Action: Competency Assessment Worksheet
Use this worksheet to assess candidates or existing team members:
Assessment Template
| Axis | Level | Evidence | Score |
|---|---|---|---|
| Technical | 0-4 | [Work samples, coding exercise, portfolio review] | ___ |
| Business | 0-4 | [Stakeholder examples, trade-off discussions] | ___ |
| Agency | 1-5 | [Behavioral interviews, reference checks] | ___ |
Calculation:
AgencyNorm = (Agency - 1) / 4 = ___
Competency Score = (Technical × 0.5) + (Business × 0.2) + (AgencyNorm × 0.3) = ___
Interpretation:
- < 1.5: Not viable for consulting work
- 1.5-2.0: Junior/mid-level roles with supervision
- 2.0-2.5: Strong mid-level, some senior roles
- 2.5-3.0: Senior/principal level
- > 3.0: Exceptional, rare
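The interpretation bands above can be encoded directly; note that how boundary values (exactly 1.5, 2.0, 2.5, 3.0) are assigned is our choice, since the original ranges leave that open.

```python
def interpret(score):
    """Map a weighted competency score to an interpretation band."""
    if score < 1.5:
        return "Not viable for consulting work"
    if score < 2.0:
        return "Junior/mid-level roles with supervision"
    if score < 2.5:
        return "Strong mid-level, some senior roles"
    if score <= 3.0:
        return "Senior/principal level"
    return "Exceptional, rare"
```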
Quick Reference: Typical Profiles by Role
| Role Type | Technical | Business | Agency | Score Range |
|---|---|---|---|---|
| Junior Engineer | 1-2 | 1 | 2-3 | 0.8-1.4 |
| Mid Engineer | 2-3 | 1-2 | 3-4 | 1.4-2.1 |
| Senior Engineer | 3-4 | 2-3 | 4-5 | 2.1-2.9 |
| Architect | 3-4 | 3-4 | 4-5 | 2.3-3.1 |
| Consultant | 2-3 | 3-4 | 4-5 | 1.8-2.6 |
| Delivery Manager | 2 | 3-4 | 4-5 | 1.8-2.1 |
Role Assessment Template: Data Engineer
Use this template to assess a data engineer against all three axes. Select the level that best describes current observable behaviour — not aspiration or potential.
How to use: Assess each axis independently using the observable signals below. Record scores in your team matrix. Reassess quarterly or after major project delivery.
Delivery-ready formula (normalised to 0–1):
Delivery-Ready Score = 0.35 × (T/4) + 0.2 × (B/4) + 0.35 × ((A − 1)/4) + 0.1 × (C/4)
Where:
T = Technical score (0–4)
B = Business score (0–4)
A = Agency score (1–5); (A − 1)/4 normalises the 1–5 range to 0–1
C = Complexity fit (0–4)
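A minimal Python sketch of the delivery-ready formula (function name is ours):

```python
def delivery_ready_score(t, b, a, c):
    """Delivery-ready score, normalised to 0-1.

    t, b, c: raw 0-4 scores; a: raw 1-5 agency score,
    normalised via (a - 1) / 4 as in the main model.
    """
    return (0.35 * t / 4) + (0.2 * b / 4) + (0.35 * (a - 1) / 4) + (0.1 * c / 4)

# An engineer at T2, B2, A3, C2:
delivery_ready_score(2, 2, 3, 2)  # -> 0.5
```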
Example scores — CaseCo Mid data engineering team:
| Engineer | Technical | Business | Agency | Complexity Fit | Delivery-Ready Score |
|---|---|---|---|---|---|
| Aino | T2 | B2 | A3 | C2 | 0.50 |
| Oskari | T3 | B2 | A4 | C3 | 0.70 |
| Liisa | T4 | B3 | A4 | C3 | 0.84 |
Download the full role assessment card set (Data Engineer, Cloud Architect, AI/ML Engineer, Delivery Lead) as an Excel template. Link coming soon.
Pitfalls
Over-indexing on Technical, ignoring Business and Agency
Medium risk: when hiring decisions are driven by coding test results alone, without assessing how someone connects to clients or operates under uncertainty.
Impact
Strong technical performers frustrate clients or consume excessive management time — offsetting their technical contribution.
Waiting for candidates who score high on all three axes
Medium risk: when hiring managers reject candidates who score 3-3-4 because they are not 4-4-5.
Impact
Roles stay unfilled for months. Team capacity suffers. The 'unicorn' hire is either prohibitively expensive or doesn't exist at the required level.
Weighted average masking a critically low axis
Medium risk: when a composite score looks acceptable but one axis is fatally low for the role.
Impact
A candidate scoring 4-4-1 (Technical 4, Business 4, Agency 1) shows a weighted score of 2.8 — looks fine — but will fail in any self-directed consulting role.
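The standard guard against this pitfall is a per-axis floor check alongside the weighted average, as in the minimum viable scores CaseCo Mid published per role. A sketch (the `passes_floors` helper and floor values are illustrative):

```python
def passes_floors(technical, business, agency, floors):
    """Check per-axis minimums before looking at the weighted average.

    floors: dict of minimum viable raw scores, e.g. {"agency": 3}.
    """
    actual = {"technical": technical, "business": business, "agency": agency}
    return all(actual[axis] >= floor for axis, floor in floors.items())

# The 4-4-1 profile: a healthy-looking weighted average...
weighted = 4 * 0.5 + 4 * 0.2 + ((1 - 1) / 4) * 0.3  # -> 2.8
# ...that a self-directed role's agency floor correctly rejects.
passes_floors(4, 4, 1, {"agency": 3})  # -> False
```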
Assessment labels becoming permanent
Medium risk: when an engineer assessed as 2-1-3 at hire is still treated as junior 18 months later, despite operating at 3-2-4.
Impact
Under-utilisation of developed talent. People leave for roles that recognise their growth. Development investment is wasted.
Next
- Technical Scale — Deep dive on technical depth assessment
- Business Scale — Deep dive on business context evaluation
- Agency Scale — Deep dive on problem ownership measurement
- Complexity & Experience — Match competency to work complexity
- Competency Matrix & Scoring — Apply the model at portfolio scale
What decisions this enables
- Whether a candidate meets the bar for a specific role based on evidence, not gut feel
- Which axis to prioritise developing in a given engineer's next growth cycle
- Whether to adjust hiring criteria when a role's requirements change
- How to staff a project when no one perfectly matches the required profile
- When to reject a strong technical performer because their agency or business scores create delivery risk