Technical Depth Scale

Five-level scale measuring technical mastery from foundational awareness to expert-level standard-setting. Calibrated for consulting delivery environments.


Executive summary

  • Technical depth measures how well someone executes work in a specific domain — from basic awareness to expert mastery
  • This is a 0-4 scale that is domain-specific: someone can be level 3 in Python and level 1 in Go
  • High technical depth (3-4) reduces rework, improves quality, and enables complex problem-solving
  • Most delivery failures happen when complexity exceeds technical depth (e.g., assigning level 2 work to level 1 talent)
  • Use work samples and technical exercises, not years of experience, to assess depth

Definitions

Technical Depth: Mastery of core technical skills within a specific domain, including foundational concepts, advanced techniques, tool proficiency, and best practices application.

What it includes: Domain-specific knowledge, tool proficiency, best practices understanding, debugging capability, code quality awareness.

What it does NOT include: Business context awareness, problem ownership (agency), communication skills, or management ability.

Key distinction: Technical depth is domain-specific — a cloud architect might be level 4 in AWS but level 1 in cybersecurity. Always specify the domain when assessing.


Why this matters

Business impact

High technical depth (3-4):

  • Reduces rework — gets it right the first time, fewer bugs
  • Enables complex work — can handle ambiguous, multi-system problems
  • Improves delivery speed — knows patterns, doesn't need constant research
  • Reduces technical debt — writes maintainable, scalable code

Low technical depth (0-1) assigned to complex work:

  • Creates delivery risk — high error rate, requires extensive review
  • Slows projects — frequent blockers, needs hand-holding
  • Accumulates debt — quick hacks instead of proper solutions
  • Frustrates teams — senior engineers spend time fixing issues
  • Creates capability gaps — work complexity exceeds team competency, forcing expensive external help

Cost reality

A level 3 engineer delivers 3-5x the value of a level 1 engineer in the same domain, even if the salary is only 1.5-2x higher.

Why: The level 3 produces working, maintainable solutions. The level 1 produces code that requires senior review, rework, and often rewrites — consuming team capacity.
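The value-for-money claim above can be sketched as simple arithmetic. A minimal illustration using the article's own rough figures (3-5x value at 1.5-2x salary), not measured data:

```python
# Illustrative value-for-money comparison using the ranges quoted above.
# All numbers are the article's rough figures, not measured data.

def value_per_salary_dollar(value_multiple: float, salary_multiple: float) -> float:
    """Relative output per unit of salary, with a level-1 engineer as baseline 1.0."""
    return value_multiple / salary_multiple

level1 = value_per_salary_dollar(1.0, 1.0)         # baseline
level3_low = value_per_salary_dollar(3.0, 2.0)     # 3x value at 2x salary
level3_high = value_per_salary_dollar(5.0, 1.5)    # 5x value at 1.5x salary

print(f"Level 1 baseline:       {level1:.2f}")
print(f"Level 3 (conservative): {level3_low:.2f}x the value per salary dollar")
print(f"Level 3 (optimistic):   {level3_high:.2f}x the value per salary dollar")
```

Even at the conservative end, the level 3 engineer returns 1.5x the value per salary dollar; at the optimistic end, over 3x.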


The Scale (0-4)

Technical level per domain — 5-person sample team, scores normalised 0–1 (level ÷ 4)

            Eng A   Eng B   Eng C   Eng D   Eng E
Cloud        75%    100%     50%     25%     88%
Data         50%     75%     88%     50%     75%
Security     25%     50%     75%    100%     50%
Platform     75%     88%     50%     75%     63%

Legend: low depth → medium → high depth
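The normalisation used in the table above can be sketched directly: score = level ÷ 4, shown as a percentage. The half-level values (88% ≈ 3.5, 63% ≈ 2.5) are my inference from the numbers, not stated in the article:

```python
# Sketch of the table's normalisation: a 0-4 technical level mapped to a
# 0-100 percentage, rounding half up so 87.5 -> 88 and 62.5 -> 63.

def normalise(level: float) -> int:
    """Map a 0-4 technical level to a rounded percentage (level / 4 * 100)."""
    return int(level / 4 * 100 + 0.5)  # round half up, unlike built-in round()

# Cloud row from the sample table; half-levels are inferred for illustration.
cloud_levels = {"Eng A": 3, "Eng B": 4, "Eng C": 2, "Eng D": 1, "Eng E": 3.5}
for eng, level in cloud_levels.items():
    print(f"{eng}: level {level} -> {normalise(level)}%")
```

Note that Python's built-in `round()` uses banker's rounding (round-half-to-even), which would turn 62.5 into 62; the explicit `+ 0.5` matches the table's 63%.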

How it works

The technical progression

Technical depth develops through cycles of learning, practice, and feedback.

Key mechanism: Pattern recognition + judgment

Technical depth combines:

  1. Foundational knowledge — understanding core concepts and principles
  2. Pattern recognition — knowing solutions to common problems
  3. Tooling proficiency — effective use of frameworks, libraries, and systems
  4. Judgment — when to apply standard patterns vs. custom solutions

Example: A level 2 Python engineer knows how to write a web API. A level 3 engineer knows when to use async vs. sync, how to structure for scale, and what trade-offs exist. A level 4 engineer defines the API framework the team uses.


Example: CaseCo Mid

CaseCo Mid (Cloud & Infrastructure practice, 120 people)

CaseCo Mid wins a Fortune 500 multi-cloud migration (AWS + Azure) — 50+ applications, compliance requirements, high complexity. The PM needs to staff it correctly: one wrong call at the Senior Architect level and the whole project is at risk.

Decision

Score available engineers against required technical depth per role, not seniority or years of experience. Match level 1-2 to runbook-driven work, level 2-3 to independent execution, level 3-4 to architectural decisions.

Outcome

Project delivered on schedule with 15% rework rate (industry benchmark: 35%). The key factor: no level 1-2 engineers were assigned to architectural decisions. All complex problems went to the right level.

Key insight: Match technical depth to work complexity. Overstaffing (level 4 doing level 2 work) wastes money. Understaffing (level 1 doing level 3 work) creates delivery risk.
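The staffing rule above — score engineers per domain, then match against the level each role requires — can be sketched as a simple lookup. Names, domains, and levels here are invented for illustration:

```python
# Hypothetical staffing check following the decision rule above: an engineer
# qualifies for a role only if their level in the role's specific domain
# meets the required level. All data below is invented for illustration.

engineers = {
    "Eng A": {"cloud": 3, "security": 1},
    "Eng B": {"cloud": 4, "security": 2},
    "Eng C": {"cloud": 2, "security": 3},
}

roles = [
    {"name": "Senior Architect",   "domain": "cloud", "required_level": 4},
    {"name": "Migration Engineer", "domain": "cloud", "required_level": 2},
]

def candidates(role: dict) -> list[str]:
    """Engineers at or above the required level in the role's domain."""
    return [
        name for name, skills in engineers.items()
        if skills.get(role["domain"], 0) >= role["required_level"]
    ]

for role in roles:
    print(f"{role['name']}: {candidates(role)}")
```

Note the `skills.get(..., 0)` default: an unassessed domain counts as level 0, mirroring the pitfall that skills do not transfer across domains automatically.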


Action: Technical Assessment Framework

Use this framework for interviews and work sample reviews:

Level 1-2 Assessment (Foundational/Proficient)

Coding exercise (60-90 minutes):

Task: Build a REST API endpoint that accepts a JSON payload, validates it,
stores data in a database, and returns a confirmation.

Requirements:
- Use [team's language/framework]
- Include input validation
- Handle error cases
- Write basic tests

Evaluation:
- Does it work? (Y/N)
- Code quality: readable, follows conventions?
- Error handling: edge cases covered?
- Tests: meaningful coverage?

Scoring:

  • Level 1: Works with prompting, quality issues, incomplete tests
  • Level 2: Works independently, clean code, good tests
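One possible shape of a passing solution to the exercise above, stripped of any web framework so the validation and storage logic stands alone. The field names (`name`, `email`) and the in-memory store are assumptions for illustration, not part of the exercise text:

```python
# Minimal sketch of the exercise: validate a JSON payload, "store" it in an
# in-memory dict (stand-in for a real database), return a confirmation.
# Required fields are an assumed schema, chosen for illustration only.

import json
import uuid

STORE: dict[str, dict] = {}          # stand-in for a real database table
REQUIRED_FIELDS = {"name", "email"}  # assumed schema for this sketch

def handle_request(raw_body: str) -> tuple[int, dict]:
    """Return (status_code, response_body) for an incoming JSON payload."""
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400, {"error": "body is not valid JSON"}
    if not isinstance(payload, dict):
        return 400, {"error": "payload must be a JSON object"}
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        return 400, {"error": f"missing fields: {sorted(missing)}"}
    record_id = str(uuid.uuid4())
    STORE[record_id] = payload
    return 201, {"id": record_id, "status": "created"}

# Basic tests of the kind the evaluation rubric asks for
status, body = handle_request('{"name": "Ada", "email": "ada@example.com"}')
assert status == 201 and body["status"] == "created"
assert handle_request("not json")[0] == 400          # malformed body
assert handle_request('{"name": "Ada"}')[0] == 400   # missing field
```

A level 2 candidate produces something like this unprompted — working happy path, explicit error cases, and tests; a level 1 candidate typically needs prompting to cover the failure branches.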

Level 2-3 Assessment (Proficient/Expert)

Architecture exercise (90 minutes):

Scenario: Design a real-time notification system for a mobile app with
100K daily active users.

Requirements:
- Users receive push notifications for specific events
- Must handle notification delivery failures
- Should be cost-effective at scale
- Discuss technology choices and trade-offs

Evaluation:
- Architecture: sensible technology choices?
- Trade-offs: identifies pros/cons of approaches?
- Scale: considers costs, failure modes, monitoring?
- Communication: explains decisions clearly?

Scoring:

  • Level 2: Functional design but missing scale/cost considerations
  • Level 3: Comprehensive design with trade-off analysis

Level 3-4 Assessment (Expert/Architect)

Portfolio review + strategic discussion:

Review:
1. Show me a complex system you designed
2. What were the key technical decisions?
3. What trade-offs did you make and why?
4. What would you do differently today?

Strategic questions:
1. How do you evaluate new technologies?
2. Describe a time you prevented a major technical problem
3. How do you balance speed vs. quality?
4. How do you mentor less experienced engineers?

Scoring:

  • Level 3: Deep technical knowledge, solid judgment, mentors effectively
  • Level 4: Strategic vision, sets standards, recognized authority

Pitfalls

Confusing years of experience with technical depth

Medium risk

When hiring managers use tenure as a proxy for capability — '10 years of cloud experience' treated as equivalent to Technical Level 3.

Impact

Hire delivers level 1-2 work at senior rates. Rework, slower delivery, and eventual re-hire when the mismatch becomes visible.

Assuming domain skills transfer automatically

Medium risk

When a level 3 Python engineer is assumed to be level 3 in Go, or an AWS level 3 is assumed to be level 3 in Azure.

Impact

Engineer placed in role requiring depth they don't have in the specific domain. Slower delivery, more errors, need for heavier supervision.

Confusing breadth with depth

Medium risk

When a candidate who 'knows 10 technologies' is assessed as more capable than someone who knows 2-3 at level 3+.

Impact

Interview performance looks strong (breadth creates confidence), delivery performance is weak (no domain is deep enough to handle complexity).

Over-indexing on certifications

Medium risk

When AWS, Azure, or Google certifications are used as a primary signal of technical depth rather than work evidence.

Impact

Hires who memorised for certifications but cannot apply knowledge. Certification ≠ production experience.


What decisions this enables

  • Which technical level to require for a specific project role, based on work complexity not title
  • Whether to upskill internally or partner externally when a level gap exists
  • How to staff a team across complexity levels to balance cost and delivery risk
  • When a senior-titled engineer is actually operating at a lower technical level than the role demands
  • How long to budget for domain transfer when moving an engineer to an adjacent technology

FAQs