Competency Matrix & Portfolio Scoring
Operational framework for applying competency assessment at portfolio scale—from individual evaluations to organization-wide capability planning.
Executive summary
- Competency matrix operationalizes the three-axis model at portfolio scale — tracking 50-500 people across 5-20 capabilities
- Four interconnected views: Capabilities (requirements), People (assessments), Gaps (supply vs. demand), Decisions (staffing actions)
- Key metrics: Delivery-ready score per capability, utilization by competency level, gap FTE by horizon
- Most firms track headcount only; competency matrix tracks capability readiness, which predicts delivery success better
- Use this framework for quarterly planning, weekly staffing decisions, and hiring prioritization
Definitions
Competency Matrix: Portfolio-scale tracking system combining capability requirements, individual competency assessments, and availability to calculate organization-wide readiness.
Delivery-Ready Score: Composite metric combining competency score (Technical × Business × Agency), complexity fit, and availability to predict whether someone can successfully staff a project.
Gap Analysis: Comparison of supply (current competency inventory) vs. demand (project requirements) to identify hiring, upskilling, or partnering needs.
What this includes: Structured data model, scoring formulas, decision rules for staffing and investment.
What this does NOT include: Performance management, compensation decisions, or career pathing (those use different systems).
Key distinction: This is a planning and staffing tool, not an HR system of record. It answers: "Do we have the capability to deliver this work?"
Why this matters
Business impact
Competency matrix enables:
- Better staffing decisions — match people to projects based on competency, not just availability
- Proactive gap identification — spot capability shortfalls 3-6 months early
- Investment prioritization — know which capabilities to hire for vs. partner
- Risk visibility — flag projects at risk due to competency-complexity mismatches
- Coverage tracking — track the proportion of required competencies adequately staffed across the portfolio
Without competency matrix:
- Reactive staffing — "who's available?" instead of "who's capable?"
- Surprise gaps — discover capability shortfalls mid-project (too late)
- Wasted hiring — hire for wrong capabilities or wrong levels
- Hidden risk — work assigned to people below the required competency level, with failures discovered mid-delivery
Why competency tracking changes outcomes
The directional effects are consistent:
- Fewer delivery surprises — competency-complexity mismatches are visible before they become client problems, not after
- Earlier gap identification — a quarterly portfolio review can spot a capability shortfall 3-6 months before it creates a staffing crisis
- Better use of senior capacity — when you know who is genuinely delivery-ready (not just available), you stop over-relying on the same 3 senior FTEs for everything
- Cleaner hiring logic — gap analysis tells you which capability to hire for, not just how many people you need
The Framework: Four Views
Portfolio readiness by capability — CaseCo Mid Q1 assessment (illustrative)
| | Cloud Arch | Data Eng | AI/ML | Cloud Infra | Governance |
|---|---|---|---|---|---|
| Technical | 82% | 78% | 45% | 72% | 50% |
| Business | 70% | 65% | 40% | 60% | 55% |
| Agency | 75% | 72% | 38% | 65% | 48% |
| Complexity Fit | 80% | 70% | 35% | 70% | 45% |
How it works
View 1: Capabilities (Requirements)
Define required competency profiles for each capability.
Example: Cloud Architecture
| Field | Value | Rationale |
|---|---|---|
| Capability | Cloud Architecture | Service line |
| Required Technical | 3-4 | Must handle complex multi-cloud designs |
| Required Business | 2-3 | Client-facing, must translate tech to business |
| Required Agency | 4-5 | High autonomy, minimal supervision |
| Required Complexity | 3 | Typical projects are enterprise-scale |
| Classification | Core | Competitive differentiator, chronic demand |
Example: Project Management
| Field | Value | Rationale |
|---|---|---|
| Capability | Project Management | Delivery support |
| Required Technical | 1 | Basic technical fluency sufficient |
| Required Business | 2-3 | Stakeholder coordination critical |
| Required Agency | 3 | Needs independence but not extreme autonomy |
| Required Complexity | 2 | Standard PM processes apply |
| Classification | Contextual | Expected but not differentiating |
View 2: People (Assessments)
Assess each person's competency per capability.
Example: Sarah (Cloud Engineer)
| Capability | Technical | Business | Agency | Complexity Experience | Availability | Delivery-Ready Score |
|---|---|---|---|---|---|---|
| Cloud Architecture | 2 | 1 | 3 | 2 | 1.0 | 0 |
| Data Engineering | 1 | 1 | 3 | 1 | 1.0 | 0.63 |
| DevOps | 3 | 2 | 4 | 3 | 1.0 | 2.03 |
Calculations (Cloud Architecture):
- Agency normalized: (3 - 1) / 4 = 0.5
- Competency Score: (2 × 0.5) + (1 × 0.2) + (0.5 × 0.3) = 1.35
- Complexity Fit: MIN(1, 2/3) = 0.67
- Agency Gate: IF(3 >= 4, 1, 0) = 0 ❌ (fails the required agency of 4)
- Delivery-Ready Score: 1.35 × 0.67 × 0 × 1.0 = 0 ❌
Interpretation: Sarah has decent technical skills but the agency gap (3 vs. required 4) means she'll need closer oversight on Cloud Architecture work than the role allows for — client-facing problems will require escalation instead of autonomous resolution. She's genuinely better suited for DevOps work where she scores 2.03 and the agency bar is lower.
On the agency gate: The binary 0/1 output is intentional — it surfaces a hard risk, not a gradient. In practice, if Sarah were the best available option for a Cloud Architecture engagement, the gate flags the gap so it can be managed deliberately: paired with a senior, shorter delivery window, specific escalation path agreed upfront. The gate doesn't prohibit the assignment. It forces the question: do we understand and accept this risk, and have we structured for it?
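The calculation above can be sketched as a small Python function. This is a minimal illustration of the document's formula; the function name and signature are illustrative, not part of any existing tool:

```python
# Sketch of the delivery-ready score: competency × complexity fit ×
# agency gate × availability. Weights (0.5 / 0.2 / 0.3) are from the
# worked example above.
def delivery_ready_score(tech, business, agency, complexity_exp,
                         req_agency, req_complexity, availability=1.0):
    agency_norm = (agency - 1) / 4                   # map 1-5 scale onto 0-1
    competency = tech * 0.5 + business * 0.2 + agency_norm * 0.3
    complexity_fit = min(1.0, complexity_exp / req_complexity)
    agency_gate = 1 if agency >= req_agency else 0   # hard risk flag, not a gradient
    return competency * complexity_fit * agency_gate * availability

# Sarah, Cloud Architecture: agency 3 < required 4, so the gate zeroes the score
print(delivery_ready_score(2, 1, 3, 2, req_agency=4, req_complexity=3))  # 0.0
```

The multiplicative structure is the point: a hard agency failure zeroes the whole score rather than being averaged away by strong technical marks.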
View 3: Capability Summary (Portfolio View)
Aggregate readiness by capability.
Example Portfolio
| Capability | Required Profile | Total Ready FTE | Avg Competency | Current Utilization | Status |
|---|---|---|---|---|---|
| Cloud Architecture | T3-4, B2-3, A4-5 | 18.5 | 2.65 | 95% | ⚠️ Under-capacity |
| Data Engineering | T3, B2, A4 | 12.2 | 2.40 | 88% | ✓ Healthy |
| Cybersecurity | T3, B2, A4 | 6.8 | 2.55 | 72% | ✓ Healthy |
| Project Management | T1, B2-3, A3 | 8.4 | 1.85 | 85% | ⚠️ Over-invested |
| Frontend Dev | T2-3, B1, A3 | 15.7 | 2.10 | 92% | ✓ Healthy |
Insights:
- Cloud Architecture: 18.5 ready FTE, 95% utilized → only 0.9 FTE bench → under-capacity, hire urgently
- Project Management: 8.4 ready FTE, 85% utilized → 1.3 FTE bench, but PM is contextual → over-invested, reduce internal team
- Data Engineering: 12.2 ready FTE, 88% utilized → 1.5 FTE bench → healthy capacity
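The bench figures in the insights above follow directly from ready FTE × (1 − utilization). A minimal Python sketch (the helper name is hypothetical):

```python
# Unallocated ready capacity: the slice of delivery-ready FTE not currently billed.
def bench_fte(ready_fte, utilization):
    return round(ready_fte * (1 - utilization), 1)

print(bench_fte(18.5, 0.95))  # 0.9 -> Cloud Architecture: under-capacity
print(bench_fte(8.4, 0.85))   # 1.3 -> Project Management: contextual, over-invested
print(bench_fte(12.2, 0.88))  # 1.5 -> Data Engineering: healthy
```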
View 4: Gap Analysis
Compare demand forecast vs. supply.
Example: Q2 2026 Demand
| Capability | Demand (FTE) | Supply (Ready FTE) | Gap | Horizon | Sourcing Decision |
|---|---|---|---|---|---|
| Cloud Arch | 22 | 18.5 | -3.5 | Immediate | Partner (2) + Activate Stash (2) |
| Data Eng | 14 | 12.2 | -1.8 | Short-term | Partner (2) |
| Security | 8 | 6.8 | -1.2 | Chronic | Selective Build (1) + Partner (1) |
| PM | 6 | 8.4 | +2.4 | N/A | Reduce internal team, use partners |
Actions:
- Cloud: Immediate gap → engage partners, activate stash
- Data: Short-term gap → partner for Q2, consider hiring if demand stays high in Q3
- Security: Chronic gap → hire 1 FTE, establish partner relationship
- PM: Surplus → phase out 2 internal PMs over 12 months
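The gap column is simply supply minus demand. A Python sketch of the table above (the forecast data mirrors the Q2 table; the function name is illustrative):

```python
# Negative = shortage to source (hire / partner / upskill); positive = surplus to redeploy.
def capability_gap(demand_fte, supply_ready_fte):
    return round(supply_ready_fte - demand_fte, 1)

q2_forecast = {
    "Cloud Arch": (22, 18.5),
    "Data Eng": (14, 12.2),
    "Security": (8, 6.8),
    "PM": (6, 8.4),
}
for capability, (demand, supply) in q2_forecast.items():
    print(capability, capability_gap(demand, supply))
```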
Example: CaseCo Mid
Quarterly competency matrix review reveals an AI/ML readiness score of 0.35–0.45 across all four dimensions — well below the 0.65 threshold for confident project staffing. In practice, a 0.35 readiness score means the team can contribute to well-scoped analytics work under supervision, but cannot lead a novel ML project without creating delivery risk. CaseCo has been declining AI engagements or staffing them with partners it can't properly quality-manage.
Decision
Classify AI/ML as under-capacity, approve 2 hires and expand the partner network for AI engineering specifically. Set a Q3 target readiness score of 0.65.
Outcome
AI/ML readiness improved from 0.35-0.45 to 0.65 by Q3. CaseCo closed 2 AI projects it would previously have declined or handed to partners without the internal competency to manage delivery quality. Revenue attributable to AI/ML capability grew by 18% year-on-year.
Action: Competency Matrix Implementation
Implementation Checklist
Phase 1: Setup (Month 1)
- Define 5-10 key capabilities
- Set required competency profiles per capability (T/B/A/Complexity)
- Classify capabilities (Core / Strategic / Contextual)
- Build spreadsheet or database schema
Phase 2: Assessment (Month 2-3)
- Assess 50-100% of billable team (start with revenue-critical roles)
- Calculate delivery-ready scores
- Identify assessment gaps (people not yet assessed)
Phase 3: Analysis (Month 3)
- Generate capability summary (ready FTE, utilization, avg competency)
- Forecast demand for next 2 quarters
- Run gap analysis
- Prioritize actions (hire / partner / upskill)
Phase 4: Operationalize (Month 4+)
- Quarterly portfolio reviews
- Monthly utilization tracking
- Weekly staffing decisions use delivery-ready scores
- Continuous assessment updates (new hires, promotions, skill development)
Spreadsheet Template (Google Sheets / Excel)
Sheet 1: Capabilities
| Capability | Req_Tech | Req_Business | Req_Agency | Req_Complexity | Classification |
|---|---|---|---|---|---|
| Cloud Architecture | 3-4 | 2-3 | 4-5 | 3 | Core |
| Data Engineering | 3 | 2 | 4 | 3 | Core |
| ... |
Sheet 2: People_Assessments
| Person | Capability | Tech | Business | Agency | Complexity_Exp | Availability |
|---|---|---|---|---|---|---|
| Sarah | Cloud Arch | 2 | 1 | 3 | 2 | 1.0 |
| Marcus | Cloud Arch | 3 | 2 | 4 | 3 | 1.0 |
| ... |
Sheet 3: Computed_Scores (calculated columns)
| Person | Capability | Agency_Norm | Competency_Score | Complexity_Fit | Agency_Gate | Delivery_Ready_Score |
|---|---|---|---|---|---|---|
| Sarah | Cloud Arch | =(Agency-1)/4 | =(Tech*0.5)+(Bus*0.2)+(AgNorm*0.3) | =MIN(1, Exp/Req) | =IF(Agency>=ReqA,1,0) | =Comp*Fit*Gate*Avail |
Sheet 4: Capability_Summary
| Capability | Total_Ready_FTE | Avg_Competency | Utilization | Status |
|---|---|---|---|---|
| Cloud Arch | =SUMIF(...) | =AVERAGE(...) | =Allocated/Ready*100 | =IF(Util>90,"⚠️","✓") |
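For teams that outgrow the spreadsheet, the same computed columns can live in code. A minimal in-memory sketch of Sheets 1-3, assuming the field names mirror the template columns above (all names and data structures are hypothetical):

```python
# Sheet 1: capability requirements (keyed by capability name)
capabilities = {
    "Cloud Arch": {"req_agency": 4, "req_complexity": 3},
}

# Sheet 2: raw assessment rows
assessments = [
    {"person": "Sarah", "capability": "Cloud Arch",
     "tech": 2, "business": 1, "agency": 3, "complexity_exp": 2, "availability": 1.0},
    {"person": "Marcus", "capability": "Cloud Arch",
     "tech": 3, "business": 2, "agency": 4, "complexity_exp": 3, "availability": 1.0},
]

def computed_row(a):
    """Replicates Sheet 3's calculated columns for one assessment row."""
    req = capabilities[a["capability"]]
    agency_norm = (a["agency"] - 1) / 4
    competency = a["tech"] * 0.5 + a["business"] * 0.2 + agency_norm * 0.3
    complexity_fit = min(1.0, a["complexity_exp"] / req["req_complexity"])
    agency_gate = 1 if a["agency"] >= req["req_agency"] else 0
    score = round(competency * complexity_fit * agency_gate * a["availability"], 3)
    return {**a, "delivery_ready": score}

for row in map(computed_row, assessments):
    print(row["person"], row["delivery_ready"])  # Sarah 0.0, Marcus 2.125
```

Keeping requirements and assessments in separate structures mirrors the two-sheet split, so changing a capability's required profile re-scores everyone assessed against it.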
Pitfalls
Assessment theatre — collecting scores but not using them
Medium risk
When the competency matrix exists and shows green/yellow/red, but staffing decisions still default to "who's available?" without reference to scores.
Impact
Wasted effort building and maintaining the matrix. Staffing decisions remain as bad as before. Competency-complexity mismatches persist. People lose trust in the system.
Stale assessments undermining matrix credibility
Medium risk
When assessments are 18+ months old and people have grown, changed roles, or left — but the matrix still shows their old scores.
Impact
Matrix becomes actively misleading. Decisions made on stale data. People lose confidence in the tool and stop using it.
Over-indexing on Technical score, ignoring Business and Agency
Medium risk
When staffing decisions are made by looking at Technical scores only, with Business and Agency as afterthoughts.
Impact
Projects staffed with technically strong but low-agency or low-business-context engineers. Delivery bottlenecks, client frustration, and unexpectedly high management overhead.
Gap identified but no action owner or timeline
Medium risk
When gap analysis shows a -5 FTE shortage in Cloud Architecture, it gets noted in a quarterly review, and nothing happens for 3 months.
Impact
Identified gaps do not get resolved. Revenue opportunities lost. Delivery risk remains. The gap analysis becomes a documentation exercise with no operational consequence.
Next
- Competency Model — Understand the three-axis scoring framework
- Complexity & Experience — Match complexity to competency
- Core vs Contextual — Classify capabilities before investment
- Staffing Gate — Use delivery-ready scores in weekly staffing decisions
What decisions this enables
- Whether a specific engineer can be assigned to a project without a competency-complexity mismatch
- Which capabilities to prioritise in the next hiring cycle based on readiness gap data
- When to partner versus hire based on a gap's classification and time horizon
- How to present capability readiness to the board in a format that connects to revenue risk
- Whether a quarterly portfolio review warrants any changes to hiring plans or partner investments