Technical Depth Scale

Five-level scale measuring technical mastery from foundational awareness to expert-level standard-setting within a domain.

10 min read

Executive summary

  • Technical depth measures how well someone executes work in a specific domain — from basic awareness to expert mastery
  • This is a 0-4 scale that is domain-specific: someone can be level 3 in Python and level 1 in Go
  • High technical depth (3-4) reduces rework, improves quality, and enables complex problem-solving
  • Most delivery failures happen when complexity exceeds technical depth (e.g., assigning level 2 work to level 1 talent)
  • Use work samples and technical exercises, not years of experience, to assess depth

Definitions

Technical Depth: Mastery of core technical skills within a specific domain, including foundational concepts, advanced techniques, tool proficiency, and best practices application.

What it includes: Domain-specific knowledge, tool proficiency, best practices understanding, debugging capability, code quality awareness.

What it does NOT include: Business context awareness, problem ownership (agency), communication skills, or management ability.

Key distinction: Technical depth is domain-specific — a cloud architect might be level 4 in AWS but level 1 in cybersecurity. Always specify the domain when assessing.


Why this matters

Business impact

High technical depth (3-4):

  • Reduces rework — gets it right the first time, fewer bugs
  • Enables complex work — can handle ambiguous, multi-system problems
  • Improves delivery speed — knows patterns, doesn't need constant research
  • Reduces technical debt — writes maintainable, scalable code

Low technical depth (0-1) assigned to complex work:

  • Creates delivery risk — high error rate, requires extensive review
  • Slows projects — frequent blockers, needs hand-holding
  • Accumulates debt — quick hacks instead of proper solutions
  • Frustrates teams — senior engineers spend time fixing issues
  • Creates capability gaps — work complexity exceeds team competency, forcing expensive external help

Cost reality

A level 3 engineer delivers 3-5x the value of a level 1 engineer in the same domain, even if the salary is only 1.5-2x higher.

Why: The level 3 produces working, maintainable solutions. The level 1 produces code that requires senior review, rework, and often rewrites — consuming team capacity.
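
With illustrative (assumed) figures: a level 1 engineer at $100K producing one unit of output returns 1.0 unit per $100K of salary; a level 3 at $180K producing four units returns about 2.2 units per $100K, more than double the value per salary dollar, before counting the senior review and rework time the level 1 consumes.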


The Scale (0-4)

  • Level 0 (Awareness): familiar with basic concepts; cannot execute work in the domain without direct guidance
  • Level 1 (Foundational): executes well-defined tasks with supervision and review
  • Level 2 (Proficient): solves standard problems independently with clean, tested work
  • Level 3 (Expert): solves novel, ambiguous problems, makes sound trade-off and architectural decisions, and mentors others
  • Level 4 (Architect): sets standards and technical direction for the domain; a recognized authority


How it works

The technical progression

Technical depth develops through repeated cycles of learning, practice, and feedback.

Key mechanism: Pattern recognition + judgment

Technical depth combines:

  1. Foundational knowledge — understanding core concepts and principles
  2. Pattern recognition — knowing solutions to common problems
  3. Tooling proficiency — effective use of frameworks, libraries, and systems
  4. Judgment — when to apply standard patterns vs. custom solutions

Example: A level 2 Python engineer knows how to write a web API. A level 3 engineer knows when to use async vs. sync, how to structure for scale, and what trade-offs exist. A level 4 engineer defines the API framework the team uses.
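
As a concrete illustration of that gap, here is a minimal Python sketch of the sync-vs-async trade-off a level 3 engineer reasons about explicitly. FastAPI and httpx are assumptions chosen for brevity, and the endpoint names and URL are purely illustrative:

# Hypothetical sketch: the sync-vs-async trade-off behind a level 3 decision.
# FastAPI and httpx are assumed; endpoint names and the URL are illustrative.
import httpx
from fastapi import FastAPI

app = FastAPI()

@app.get("/report/sync")
def report_sync() -> dict:
    # Level 2 instinct: simple and correct, but the request holds a worker
    # thread while it waits on the downstream service.
    resp = httpx.get("https://internal.example/api/stats", timeout=5.0)
    return {"stats": resp.json()}

@app.get("/report/async")
async def report_async() -> dict:
    # Level 3 judgment: for I/O-bound calls, async lets the event loop keep
    # serving other requests while this one awaits the network response.
    async with httpx.AsyncClient(timeout=5.0) as client:
        resp = await client.get("https://internal.example/api/stats")
    return {"stats": resp.json()}

The code itself is short; the depth lies in knowing which version to reach for, and why, before the choice matters under load.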


Example: CaseCo Mid

{
  "canonical_block": "role_profile",
  "version": "1.0.0",
  "case_ref": "caseco.mid.v1",
  "updated_date": "2026-02-16",

  "scenario_title": "Cloud Infrastructure Roles by Technical Depth",
  "scenario_description": "CaseCo Mid's Cloud & Infrastructure practice (120 people) needs to staff a multi-cloud enterprise migration project",

  "roles_by_technical_depth": [
    {
      "role_title": "Junior Cloud Engineer",
      "technical_depth_required": "1-2",
      "typical_tasks": [
        "Configure EC2 instances following runbooks",
        "Set up VPCs using Terraform templates",
        "Monitor dashboards and create tickets for anomalies",
        "Update security group rules based on requests"
      ],
      "why_this_level": "Tasks are well-defined with clear acceptance criteria. Errors caught in code review.",
      "supervision_needed": "Weekly 1:1s, daily standups, code review on all changes"
    },
    {
      "role_title": "Cloud Engineer",
      "technical_depth_required": "2-3",
      "typical_tasks": [
        "Design and implement VPC architectures for new applications",
        "Troubleshoot networking issues across multiple AWS accounts",
        "Build CI/CD pipelines using GitHub Actions + Terraform",
        "Implement monitoring and alerting for production services"
      ],
      "why_this_level": "Tasks require independent problem-solving and trade-off decisions. Light supervision.",
      "supervision_needed": "Bi-weekly 1:1s, architecture review for major changes"
    },
    {
      "role_title": "Senior Cloud Architect",
      "technical_depth_required": "3-4",
      "typical_tasks": [
        "Design multi-cloud architectures for enterprise clients",
        "Make build vs. buy decisions for infrastructure components",
        "Lead technical discovery for $5M+ engagements",
        "Define cloud standards and best practices for practice",
        "Mentor cloud engineers and review complex designs"
      ],
      "why_this_level": "Ambiguous problems requiring strategic judgment. Sets direction for others.",
      "supervision_needed": "Monthly 1:1s, peer review on strategic decisions"
    }
  ],

  "staffing_scenario": {
    "project": "Fortune 500 multi-cloud migration (AWS + Azure)",
    "complexity": "High (legacy systems, compliance requirements, 50+ applications)",
    "team_composition": {
      "required": [
        "1 Senior Cloud Architect (Level 3-4 AWS + Azure)",
        "2 Cloud Engineers (Level 2-3 AWS)",
        "2 Cloud Engineers (Level 2-3 Azure)",
        "3 Junior Cloud Engineers (Level 1-2, mixed AWS/Azure)"
      ],
      "rationale": "Senior architect handles ambiguity and strategic decisions. Level 2-3 engineers execute designs independently. Junior engineers handle runbook-driven tasks."
    },
    "common_mistake": "Assigning level 1-2 engineers to architecture decisions. Result: multiple rework cycles, client escalations, margin erosion."
  }
}

What this example shows

  • Level 1-2 handles structured, low-risk work with supervision
  • Level 2-3 operates independently on standard infrastructure tasks
  • Level 3-4 handles ambiguity and makes strategic trade-offs

Key insight: Match technical depth to work complexity. Overstaffing (level 4 doing level 2 work) wastes money. Understaffing (level 1 doing level 3 work) creates delivery risk.


Action: Technical Assessment Framework

Use this framework for interviews and work sample reviews:

Level 1-2 Assessment (Foundational/Proficient)

Coding exercise (60-90 minutes):

Task: Build a REST API endpoint that accepts a JSON payload, validates it,
stores data in a database, and returns a confirmation.

Requirements:
- Use [team's language/framework]
- Include input validation
- Handle error cases
- Write basic tests

Evaluation:
- Does it work? (Y/N)
- Code quality: readable, follows conventions?
- Error handling: edge cases covered?
- Tests: meaningful coverage?

Scoring:

  • Level 1: Works with prompting, quality issues, incomplete tests
  • Level 2: Works independently, clean code, good tests
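
For calibration, a compressed sketch of what a passing level 2 submission tends to look like. The stack (FastAPI, Pydantic, sqlite3) is an assumption; the exercise deliberately leaves the choice open:

# Illustrative level 2 solution sketch for the exercise above.
# FastAPI + Pydantic + sqlite3 are assumed; any equivalent stack works.
import sqlite3

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI()
conn = sqlite3.connect("orders.db", check_same_thread=False)
conn.execute("CREATE TABLE IF NOT EXISTS orders (sku TEXT, qty INTEGER)")

class Order(BaseModel):
    sku: str = Field(min_length=1)   # input validation: non-empty SKU
    qty: int = Field(gt=0)           # input validation: positive quantity

@app.post("/orders", status_code=201)
def create_order(order: Order) -> dict:
    try:
        with conn:                   # commit on success, roll back on error
            conn.execute("INSERT INTO orders VALUES (?, ?)", (order.sku, order.qty))
    except sqlite3.Error as exc:     # error case: surface storage failures
        raise HTTPException(status_code=500, detail="storage failure") from exc
    return {"status": "created", "sku": order.sku}

# Basic test (pytest + fastapi.testclient.TestClient):
#   def test_rejects_zero_quantity():
#       assert TestClient(app).post("/orders", json={"sku": "A1", "qty": 0}).status_code == 422

A level 1 submission typically reaches the happy path but misses the validation constraints, the error branch, or the tests.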

Level 2-3 Assessment (Proficient/Expert)

Architecture exercise (90 minutes):

Scenario: Design a real-time notification system for a mobile app with
100K daily active users.

Requirements:
- Users receive push notifications for specific events
- Must handle notification delivery failures
- Should be cost-effective at scale
- Discuss technology choices and trade-offs

Evaluation:
- Architecture: sensible technology choices?
- Trade-offs: identifies pros/cons of approaches?
- Scale: considers costs, failure modes, monitoring?
- Communication: explains decisions clearly?

Scoring:

  • Level 2: Functional design but missing scale/cost considerations
  • Level 3: Comprehensive design with trade-off analysis
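
To make the delivery-failure criterion concrete, here is a small Python sketch of the retry-plus-dead-letter pattern a level 3 answer usually raises unprompted. send_push and dead_letter_queue are hypothetical stand-ins, not a real provider API:

# Hypothetical sketch: retry with exponential backoff plus a dead-letter queue.
# send_push and dead_letter_queue are illustrative stand-ins, not a real API.
import random
import time

class TransientDeliveryError(Exception):
    """Raised by send_push for retryable failures (timeouts, provider 5xx)."""

MAX_ATTEMPTS = 4

def deliver(notification: dict, send_push, dead_letter_queue) -> bool:
    """Attempt delivery with backoff; park permanently failed messages."""
    for attempt in range(MAX_ATTEMPTS):
        try:
            send_push(notification)              # e.g., an APNs/FCM client call
            return True
        except TransientDeliveryError:
            # Exponential backoff with jitter avoids retry storms at scale.
            time.sleep((2 ** attempt) + random.random())
    # Retries exhausted: keep the message for inspection and replay rather
    # than dropping it silently.
    dead_letter_queue.put(notification)
    return False

The design conversation, not the code, separates level 2 from level 3: backoff strategy, what gets dead-lettered, how it is monitored, and what it costs at 100K daily active users.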

Level 3-4 Assessment (Expert/Architect)

Portfolio review + strategic discussion:

Review:
1. Show me a complex system you designed
2. What were the key technical decisions?
3. What trade-offs did you make and why?
4. What would you do differently today?

Strategic questions:
1. How do you evaluate new technologies?
2. Describe a time you prevented a major technical problem
3. How do you balance speed vs. quality?
4. How do you mentor less experienced engineers?

Scoring:

  • Level 3: Deep technical knowledge, solid judgment, mentors effectively
  • Level 4: Strategic vision, sets standards, recognized authority

Pitfalls

Pitfall 1: Confusing years of experience with technical depth

Early warning: 10-year engineer struggles with intermediate problems; claims expertise but delivers level 1-2 work.

Why this happens: Years measure time, not learning. Someone can repeat the same year of experience 10 times.

Fix: Assess work samples and problem-solving, not resume tenure. Ask: "What's the most complex problem you've solved in this domain?"


Pitfall 2: Domain transfer assumptions

Early warning: Strong Python engineer struggles with Golang; assumes skills transfer 1:1.

Why this happens: Language syntax transfers, but idioms and ecosystems don't. A level 3 in one language is often level 1-2 in a new one initially.

Fix: Assess depth within the specific domain needed. Budget 3-6 months for strong engineers to reach proficiency in adjacent domains.


Pitfall 3: Confusing breadth with depth

Early warning: Engineer knows 10 technologies at level 1 but none at level 3. Appears impressive in interviews, struggles in delivery.

Why this happens: Surface knowledge is easier to acquire and signal. Depth requires sustained practice.

Fix: Probe depth in 1-2 domains, not breadth across many. Ask: "Walk me through your most complex [X] project. What challenges did you face? How did you solve them?"


Pitfall 4: Over-indexing on certifications

Early warning: Candidate has AWS certifications but cannot design a basic VPC architecture.

Why this happens: Certifications test knowledge, not application. Memorization ≠ mastery.

Fix: Use certifications as a signal of interest, not capability. Always validate with work samples or technical exercises.


FAQs

Q: How do I know if someone is level 2 vs. level 3?

A: Level 2 solves standard problems independently. Level 3 solves novel, ambiguous problems and makes architectural decisions. Test by giving an ambiguous requirement and seeing if they ask good clarifying questions and propose sensible trade-offs.


Q: Can someone be level 4 in multiple domains?

A: Rare but possible. Most level 4 engineers specialize deeply in 1-2 domains and are level 2-3 in adjacent areas. Breadth at level 4 is very expensive and takes 10+ years to develop.


Q: Should I hire level 3-4 for all roles?

A: No. Match depth to work complexity. Junior/mid roles often need level 2 (proficient independent execution). Senior roles need level 3-4 (complex problem-solving, architectural decisions). Overhiring wastes budget.


Q: How long does it take to go from level 1 to level 3?

A: Depends on domain and learning velocity:

  • Fast learner with good mentorship: 2-3 years
  • Average pace: 3-5 years
  • Slow or self-taught: 5-7 years

Some people plateau at level 2 and need to switch domains or roles to continue growing.


Q: What if I need level 3-4 depth but can only find level 2?

A: Three options:

  1. Upskill internally — invest in training and mentorship (6-18 months)
  2. Partner — engage external experts for complex work, build internal capacity over time
  3. Reduce scope — simplify the problem to match available depth (often underrated)