Competency Matrix & Portfolio Scoring

Operational framework for applying competency assessment at portfolio scale—from individual evaluations to organization-wide capability planning.


Executive summary

  • Competency matrix operationalizes the three-axis model at portfolio scale — tracking 50-500 people across 5-20 capabilities
  • Four interconnected views: Capabilities (requirements), People (assessments), Gaps (supply vs. demand), Decisions (staffing actions)
  • Key metrics: Delivery-ready score per capability, utilization by competency level, gap FTE by horizon
  • Most firms track headcount only; competency matrix tracks capability readiness, which predicts delivery success better
  • Use this framework for quarterly planning, weekly staffing decisions, and hiring prioritization

Definitions

Competency Matrix: Portfolio-scale tracking system combining capability requirements, individual competency assessments, and availability to calculate organization-wide readiness.

Delivery-Ready Score: Composite metric combining competency score (Technical × Business × Agency), complexity fit, and availability to predict whether someone can successfully staff a project.

Gap Analysis: Comparison of supply (current competency inventory) vs. demand (project requirements) to identify hiring, upskilling, or partnering needs.

What this includes: Structured data model, scoring formulas, decision rules for staffing and investment.

What this does NOT include: Performance management, compensation decisions, or career pathing (those use different systems).

Key distinction: This is a planning and staffing tool, not an HR system of record. It answers: "Do we have the capability to deliver this work?"


Why this matters

Business impact

Competency matrix enables:

  • Better staffing decisions — match people to projects based on competency, not just availability
  • Proactive gap identification — spot capability shortfalls 3-6 months early
  • Investment prioritization — know which capabilities to hire for vs. partner
  • Risk visibility — flag projects at risk due to competency-complexity mismatches
  • Improved coverage — track the proportion of required competencies adequately staffed across the portfolio

Without competency matrix:

  • Reactive staffing — "who's available?" instead of "who's capable?"
  • Surprise gaps — discover capability shortfalls mid-project (too late)
  • Wasted hiring — hire for wrong capabilities or wrong levels
  • Hidden risk — work assigned to people with insufficient competency, with failures discovered only in delivery

Why competency tracking changes outcomes

The directional effects are consistent:

  • Fewer delivery surprises — competency-complexity mismatches are visible before they become client problems, not after
  • Earlier gap identification — a quarterly portfolio review can spot a capability shortfall 3-6 months before it creates a staffing crisis
  • Better use of senior capacity — when you know who is genuinely delivery-ready (not just available), you stop over-relying on the same 3 senior FTEs for everything
  • Cleaner hiring logic — gap analysis tells you which capability to hire for, not just how many people you need

The Framework: Four Views

Portfolio readiness by capability — CaseCo Mid Q1 assessment (illustrative)

|                | Cloud Arch | Data Eng | AI/ML | Cloud Infra | Governance |
|----------------|------------|----------|-------|-------------|------------|
| Technical      | 82%        | 78%      | 45%   | 72%         | 50%        |
| Business       | 70%        | 65%      | 40%   | 60%         | 55%        |
| Agency         | 75%        | 72%      | 38%   | 65%         | 48%        |
| Complexity Fit | 80%        | 70%      | 35%   | 70%         | 45%        |

How it works

View 1: Capabilities (Requirements)

Define required competency profiles for each capability.

Example: Cloud Architecture

| Field               | Value              | Rationale                                      |
|---------------------|--------------------|------------------------------------------------|
| Capability          | Cloud Architecture | Service line                                   |
| Required Technical  | 3-4                | Must handle complex multi-cloud designs        |
| Required Business   | 2-3                | Client-facing, must translate tech to business |
| Required Agency     | 4-5                | High autonomy, minimal supervision             |
| Required Complexity | 3                  | Typical projects are enterprise-scale          |
| Classification      | Core               | Competitive differentiator, chronic demand     |

Example: Project Management

| Field               | Value              | Rationale                                  |
|---------------------|--------------------|--------------------------------------------|
| Capability          | Project Management | Delivery support                           |
| Required Technical  | 1                  | Basic technical fluency sufficient         |
| Required Business   | 2-3                | Stakeholder coordination critical          |
| Required Agency     | 3                  | Needs independence but not extreme autonomy |
| Required Complexity | 2                  | Standard PM processes apply                |
| Classification      | Contextual         | Expected but not differentiating           |

View 2: People (Assessments)

Assess each person's competency per capability.

Example: Sarah (Cloud Engineer)

| Capability         | Technical | Business | Agency | Complexity Experience | Availability | Delivery-Ready Score |
|--------------------|-----------|----------|--------|-----------------------|--------------|----------------------|
| Cloud Architecture | 2         | 1        | 3      | 2                     | 1.0          | 0                    |
| Data Engineering   | 1         | 1        | 3      | 1                     | 1.0          | 0                    |
| DevOps             | 3         | 2        | 4      | 3                     | 1.0          | 2.03                 |

Calculations:

  • Cloud Architecture Competency Score: (2 × 0.5) + (1 × 0.2) + (0.5 × 0.3) = 1.35, where the 0.5 in the final term is the normalized agency score, (3 − 1) / 4
  • Complexity Fit (Cloud): MIN(1, 2/3) = 0.67
  • Agency Gate (Cloud): IF(3 >= 4, 1, 0) = 0 ❌ (Fails agency requirement)
  • Delivery-Ready Score (Cloud): 1.35 × 0.67 × 0 × 1.0 = 0

Interpretation: Sarah has decent technical skills but the agency gap (3 vs. required 4) means she'll need closer oversight on Cloud Architecture work than the role allows for — client-facing problems will require escalation instead of autonomous resolution. She's genuinely better suited for DevOps work where she scores 2.03 and the agency bar is lower.

On the agency gate: The binary 0/1 output is intentional — it surfaces a hard risk, not a gradient. In practice, if Sarah were the best available option for a Cloud Architecture engagement, the gate flags the gap so it can be managed deliberately: paired with a senior, shorter delivery window, specific escalation path agreed upfront. The gate doesn't prohibit the assignment. It forces the question: do we understand and accept this risk, and have we structured for it?
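The arithmetic above can be sketched as a small function. The weights (0.5 / 0.2 / 0.3), the (agency − 1) / 4 normalization, and the hard agency gate come from the formulas in this section; the function name and signature are illustrative.

```python
# Delivery-ready score, a minimal sketch of the formulas in this section.
# Weights and normalization mirror the worked example; names are illustrative.

def delivery_ready_score(tech, business, agency, complexity_exp,
                         req_agency, req_complexity, availability):
    """Return 0 when the agency gate fails, else the composite score."""
    agency_norm = (agency - 1) / 4                      # map the 1-5 scale to 0-1
    competency = tech * 0.5 + business * 0.2 + agency_norm * 0.3
    complexity_fit = min(1, complexity_exp / req_complexity)
    agency_gate = 1 if agency >= req_agency else 0      # hard risk flag, not a gradient
    return competency * complexity_fit * agency_gate * availability

# Sarah on Cloud Architecture: agency 3 < required 4, so the gate zeroes the score
print(delivery_ready_score(2, 1, 3, 2,
                           req_agency=4, req_complexity=3, availability=1.0))  # -> 0.0
```

Running the same function with the gate passing (for example, agency 4 against a requirement of 4) returns the ungated composite, which is what the spreadsheet's `Delivery_Ready_Score` column computes.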


View 3: Capability Summary (Portfolio View)

Aggregate readiness by capability.

Example Portfolio

| Capability         | Required Profile | Total Ready FTE | Avg Competency | Current Utilization | Status            |
|--------------------|------------------|-----------------|----------------|---------------------|-------------------|
| Cloud Architecture | T3-4, B2-3, A4-5 | 18.5            | 2.65           | 95%                 | ⚠️ Under-capacity |
| Data Engineering   | T3, B2, A4       | 12.2            | 2.40           | 88%                 | ✓ Healthy         |
| Cybersecurity      | T3, B2, A4       | 6.8             | 2.55           | 72%                 | ✓ Healthy         |
| Project Management | T1, B2-3, A3     | 8.4             | 1.85           | 85%                 | ⚠️ Over-invested  |
| Frontend Dev       | T2-3, B1, A3     | 15.7            | 2.10           | 92%                 | ✓ Healthy         |

Insights:

  • Cloud Architecture: 18.5 ready FTE, 95% utilized → only 0.9 FTE bench → under-capacity, hire urgently
  • Project Management: 8.4 ready FTE, 85% utilized → 1.3 FTE bench, but PM is contextual → over-invested, reduce internal team
  • Data Engineering: 12.2 ready FTE, 88% utilized → 1.5 FTE bench → healthy capacity
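The bench and status logic behind these insights can be sketched as follows. The >90% utilization threshold matches the summary table; treating a contextual capability with more than 1 bench FTE as over-invested is an assumption for illustration.

```python
# Bench capacity and status flags behind the insights above.
# >90% utilization -> under-capacity matches the summary table; the
# over-investment rule for contextual capabilities is an assumption.

def bench_fte(ready_fte, utilization_pct):
    """Unallocated capacity: ready FTE not currently utilized."""
    return ready_fte * (1 - utilization_pct / 100)

def status(ready_fte, utilization_pct, classification="Core"):
    if utilization_pct > 90:
        return "under-capacity"
    if classification == "Contextual" and bench_fte(ready_fte, utilization_pct) > 1:
        return "over-invested"
    return "healthy"

print(round(bench_fte(18.5, 95), 1))  # Cloud Architecture: 0.9 FTE bench
```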

View 4: Gap Analysis

Compare demand forecast vs. supply.

Example: Q2 2026 Demand

| Capability | Demand (FTE) | Supply (Ready FTE) | Gap  | Horizon    | Sourcing Decision                 |
|------------|--------------|--------------------|------|------------|-----------------------------------|
| Cloud Arch | 22           | 18.5               | -3.5 | Immediate  | Partner (2) + Activate Stash (2)  |
| Data Eng   | 14           | 12.2               | -1.8 | Short-term | Partner (2)                       |
| Security   | 8            | 6.8                | -1.2 | Chronic    | Selective Build (1) + Partner (1) |
| PM         | 6            | 8.4                | +2.4 | N/A        | Reduce internal team, use partners |

Actions:

  1. Cloud: Immediate gap → engage partners, activate stash
  2. Data: Short-term gap → partner for Q2, consider hiring if demand stays high in Q3
  3. Security: Chronic gap → hire 1 FTE, establish partner relationship
  4. PM: Surplus → phase out 2 internal PMs over 12 months
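The gap computation behind the table is simple enough to script; capability names and figures below are taken from the Q2 example.

```python
# Gap = supply minus demand, so a negative number is a shortfall
# to source externally or hire for; a positive number is surplus.

def capability_gap(demand_fte, supply_ready_fte):
    return supply_ready_fte - demand_fte

demand = {"Cloud Arch": 22, "Data Eng": 14, "Security": 8, "PM": 6}
supply = {"Cloud Arch": 18.5, "Data Eng": 12.2, "Security": 6.8, "PM": 8.4}

for cap in demand:
    print(f"{cap}: {capability_gap(demand[cap], supply[cap]):+.1f} FTE")
```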

Example: CaseCo Mid

CaseCo Mid (data & AI consultancy, 350 billable people)

Quarterly competency matrix review reveals an AI/ML readiness score of 0.35–0.45 across all four dimensions — well below the 0.65 threshold for confident project staffing. In practice, a 0.35 readiness score means the team can contribute to well-scoped analytics work under supervision, but cannot lead a novel ML project without creating delivery risk. CaseCo has been declining AI engagements or staffing them with partners it can't properly quality-manage.

Decision

Classify AI/ML as under-capacity, approve 2 hires and expand the partner network for AI engineering specifically. Set a Q3 target readiness score of 0.65.

Outcome

AI/ML readiness improved from 0.35-0.45 to 0.65 by Q3. CaseCo closed 2 AI projects it would previously have declined or handed to partners without the internal competency to manage delivery quality. Revenue attributable to AI/ML capability grew by 18% year-on-year.


Action: Competency Matrix Implementation

Implementation Checklist

Phase 1: Setup (Month 1)

  • Define 5-10 key capabilities
  • Set required competency profiles per capability (T/B/A/Complexity)
  • Classify capabilities (Core / Strategic / Contextual)
  • Build spreadsheet or database schema

Phase 2: Assessment (Month 2-3)

  • Assess 50-100% of billable team (start with revenue-critical roles)
  • Calculate delivery-ready scores
  • Identify assessment gaps (people not yet assessed)

Phase 3: Analysis (Month 3)

  • Generate capability summary (ready FTE, utilization, avg competency)
  • Forecast demand for next 2 quarters
  • Run gap analysis
  • Prioritize actions (hire / partner / upskill)

Phase 4: Operationalize (Month 4+)

  • Quarterly portfolio reviews
  • Monthly utilization tracking
  • Weekly staffing decisions use delivery-ready scores
  • Continuous assessment updates (new hires, promotions, skill development)

Spreadsheet Template (Google Sheets / Excel)

Sheet 1: Capabilities

| Capability         | Req_Tech | Req_Business | Req_Agency | Req_Complexity | Classification |
|--------------------|----------|--------------|------------|----------------|----------------|
| Cloud Architecture | 3-4      | 2-3          | 4-5        | 3              | Core           |
| Data Engineering   | 3        | 2            | 4          | 3              | Core           |
| ...                |          |              |            |                |                |

Sheet 2: People_Assessments

| Person | Capability | Tech | Business | Agency | Complexity_Exp | Availability |
|--------|------------|------|----------|--------|----------------|--------------|
| Sarah  | Cloud Arch | 2    | 1        | 3      | 2              | 1.0          |
| Marcus | Cloud Arch | 3    | 2        | 4      | 3              | 1.0          |
| ...    |            |      |          |        |                |              |

Sheet 3: Computed_Scores (calculated columns)

| Person | Capability | Agency_Norm | Competency_Score                   | Complexity_Fit   | Agency_Gate            | Delivery_Ready_Score  |
|--------|------------|-------------|------------------------------------|------------------|------------------------|-----------------------|
| Sarah  | Cloud Arch | =(C-1)/4    | =(Tech*0.5)+(Bus*0.2)+(AgNorm*0.3) | =MIN(1, Exp/Req) | =IF(Agency>=ReqA,1,0)  | =Comp*Fit*Gate*Avail  |

Sheet 4: Capability_Summary

| Capability | Total_Ready_FTE | Avg_Competency | Utilization          | Status                   |
|------------|-----------------|----------------|----------------------|--------------------------|
| Cloud Arch | =SUMIF(...)     | =AVERAGE(...)  | =Allocated/Ready*100 | =IF(Util>90,"⚠️","✓")   |
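For teams that outgrow the spreadsheet, the same pipeline translates directly to code. A minimal sketch, assuming "ready" means a delivery-ready score at or above 0.65 (the threshold from the CaseCo example) and that Ready FTE sums the availability of ready people:

```python
# Minimal Python equivalent of Sheets 1-4. "Ready" threshold (0.65) is
# borrowed from the CaseCo example; requirements are simplified to single
# values where the template allows ranges.

REQUIREMENTS = {  # Sheet 1
    "Cloud Arch": {"req_agency": 4, "req_complexity": 3},
}

PEOPLE = [  # Sheet 2
    {"person": "Sarah", "capability": "Cloud Arch",
     "tech": 2, "bus": 1, "agency": 3, "complexity_exp": 2, "availability": 1.0},
    {"person": "Marcus", "capability": "Cloud Arch",
     "tech": 3, "bus": 2, "agency": 4, "complexity_exp": 3, "availability": 1.0},
]

def delivery_ready(row, req):
    """Sheet 3 formulas: competency x complexity fit x agency gate x availability."""
    competency = row["tech"] * 0.5 + row["bus"] * 0.2 + (row["agency"] - 1) / 4 * 0.3
    fit = min(1, row["complexity_exp"] / req["req_complexity"])
    gate = 1 if row["agency"] >= req["req_agency"] else 0
    return competency * fit * gate * row["availability"]

def summarize(capability, threshold=0.65):
    """Sheet 4 aggregation: Ready FTE = summed availability of ready people."""
    req = REQUIREMENTS[capability]
    ready = [r for r in PEOPLE
             if r["capability"] == capability and delivery_ready(r, req) >= threshold]
    return {
        "ready_fte": sum(r["availability"] for r in ready),
        "ready_people": [r["person"] for r in ready],
    }

print(summarize("Cloud Arch"))  # Sarah fails the agency gate; only Marcus is ready
```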

Pitfalls

Assessment theatre — collecting scores but not using them

Medium risk

When the competency matrix exists and shows green/yellow/red, but staffing decisions still default to 'who's available' without reference to scores.

Impact

Wasted effort building and maintaining the matrix. Staffing decisions remain as bad as before. Competency-complexity mismatches persist. People lose trust in the system.

Stale assessments undermining matrix credibility

Medium risk

When assessments are 18+ months old and people have grown, changed roles, or left — but the matrix still shows their old scores.

Impact

Matrix becomes actively misleading. Decisions made on stale data. People lose confidence in the tool and stop using it.

Over-indexing on Technical score, ignoring Business and Agency

Medium risk

When staffing decisions are made by looking at Technical scores only, with Business and Agency as afterthoughts.

Impact

Projects staffed with technically strong, low-agency or low-business-context engineers. Delivery bottlenecks, client frustration, management overhead unexpectedly high.

Gap identified but no action owner or timeline

Medium risk

When gap analysis shows a -5 FTE shortage in Cloud Architecture, it gets noted in a quarterly review, and nothing happens for 3 months.

Impact

Identified gaps do not get resolved. Revenue opportunities lost. Delivery risk remains. The gap analysis becomes a documentation exercise with no operational consequence.



What decisions this enables

  • Whether a specific engineer can be assigned to a project without a competency-complexity mismatch
  • Which capabilities to prioritise in the next hiring cycle based on readiness gap data
  • When to partner versus hire based on a gap's classification and time horizon
  • How to present capability readiness to the board in a format that connects to revenue risk
  • Whether a quarterly portfolio review warrants any changes to hiring plans or partner investments
