Code Quality Assessment
Evaluating software quality, maintainability, and technical debt
Code quality assessment is the foundation of technical due diligence for software-intensive acquisitions. It reveals the true health of the software assets you're acquiring—and often uncovers surprises that materially impact deal value.
What is Code Quality?
Code quality isn't a single metric—it's a multi-dimensional assessment of how well software is built and how easy it will be to maintain and extend. In M&A contexts, poor code quality translates directly to:
- Higher maintenance costs post-acquisition
- Slower feature development velocity
- Increased defect rates and customer impact
- Greater difficulty attracting and retaining engineering talent
- Higher risk of security vulnerabilities
Code Quality Dimensions
| Dimension | What It Means | Why It Matters in M&A |
|---|---|---|
| Readability | Can developers understand the code easily? | Affects onboarding time and maintenance cost |
| Maintainability | Can changes be made safely and efficiently? | Determines ongoing development velocity |
| Reliability | Does the code work correctly under all conditions? | Impacts customer satisfaction and support costs |
| Performance | Does the code execute efficiently? | Affects infrastructure costs and user experience |
| Security | Is the code free from vulnerabilities? | Determines breach risk and compliance status |
| Testability | Can the code be tested effectively? | Enables safe changes and continuous deployment |
Assessment Methods
1. Static Analysis (Automated)
Automated tools analyze code without executing it, providing objective, repeatable metrics:
Common Tools:
| Tool | Primary Focus | Best For |
|---|---|---|
| SonarQube/SonarCloud | Overall code quality, technical debt | Comprehensive analysis, multi-language |
| CodeClimate | Maintainability, test coverage | Quick setup, good for smaller codebases |
| Snyk | Security vulnerabilities, dependencies | Open source security, license compliance |
| Checkmarx | Security (SAST) | Enterprise security scanning |
| ESLint/Pylint/RuboCop | Language-specific linting | Coding standards, style issues |
Limitations: Static analysis can't assess business logic correctness, architectural decisions, or code that "works but shouldn't." Its findings always need human interpretation.
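To make the idea concrete, a toy static check takes only a few lines of Python. The sketch below approximates per-function cyclomatic complexity using the standard-library `ast` module; it is purely illustrative, and production tools such as SonarQube apply far richer rule sets.

```python
import ast

# Node types treated as decision points (approximate McCabe counting;
# a real analyzer also handles comprehensions, match statements, etc.).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> dict:
    """Return {function_name: 1 + decision points} for each function.

    Note: branches inside nested functions are also counted into the
    enclosing function here; a real tool separates scopes.
    """
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores
```

Running something like this over sampled files gives a quick, repeatable complexity signal that can be compared against the benchmarks later in this section.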
2. Manual Code Review (Expert-Driven)
Experienced engineers review code samples to assess aspects tools can't measure:
- Architecture patterns: Are design patterns applied appropriately?
- Business logic: Does the code correctly implement requirements?
- Critical paths: Are high-risk areas (payments, authentication, data processing) well-implemented?
- Code organization: Is the codebase logically structured?
- Naming and conventions: Is the code self-documenting?
Sampling Strategy: With limited time, focus manual review on:
- Core business logic (revenue-generating features)
- Security-critical components (auth, payments, data access)
- High-churn files (frequently modified = potentially problematic)
- Recently added code (indicates current practices)
- Legacy code (oldest files, often most problematic)
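The churn signal above can be pulled straight from version control. A minimal sketch, assuming `git` is on the PATH and the script runs from the repository root; the counting logic is split into a pure `rank_churn` helper so it works on any log text. The 12-month window and top-20 cutoff are illustrative choices, not fixed recommendations.

```python
import subprocess
from collections import Counter

def name_only_log(since="12 months ago"):
    """Raw `git log --name-only` output: one touched file path per line."""
    return subprocess.run(
        ["git", "log", f"--since={since}", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout

def rank_churn(log_text, top_n=20):
    """Count how often each file appears and return the most-touched files."""
    counts = Counter(line.strip() for line in log_text.splitlines() if line.strip())
    return counts.most_common(top_n)

# Demo on synthetic log output: billing.py was touched twice.
sample = "src/billing.py\n\nREADME.md\nsrc/billing.py\n"
print(rank_churn(sample))  # -> [('src/billing.py', 2), ('README.md', 1)]
```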
3. Repository Analysis (Historical)
Examining version control history reveals development patterns:
| Metric | What It Reveals | Red Flag Threshold |
|---|---|---|
| Commit frequency | Development activity and rhythm | <10 commits/week for active development |
| Author distribution | Bus factor, knowledge concentration | >50% of commits from single author |
| File churn | Problematic areas, frequent changes | Files changed >20 times/month |
| Commit size | Review practices, change management | Average >500 lines/commit |
| Branch strategy | Development maturity | Direct commits to main/master |
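Author concentration, the bus-factor signal in the table above, is equally easy to compute once commit author names have been extracted (e.g. from `git log --pretty=format:%an`). A sketch; the 50% threshold mirrors the table's red-flag line.

```python
from collections import Counter

def author_concentration(authors):
    """Fraction of commits made by the single most active author."""
    if not authors:
        return 0.0
    _, top = Counter(authors).most_common(1)[0]
    return top / len(authors)

def bus_factor_flag(authors, threshold=0.5):
    """True when one author wrote more than `threshold` of all commits."""
    return author_concentration(authors) > threshold
```

Feeding in the output of `git log --pretty=format:%an`, split on newlines, gives the ratio directly.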
Key Metrics and Benchmarks
Industry Benchmarks
| Metric | Good | Acceptable | Concerning | Critical |
|---|---|---|---|---|
| Test Coverage | >80% | 60-80% | 40-60% | <40% |
| Code Duplication | <3% | 3-5% | 5-10% | >10% |
| Cyclomatic Complexity (avg) | <10 | 10-15 | 15-25 | >25 |
| Technical Debt Ratio | <5% | 5-10% | 10-20% | >20% |
| Dependency Age (avg) | <1 year | 1-2 years | 2-4 years | >4 years |
| Security Vulnerabilities | 0 critical | 0 critical, <5 high | 1-2 critical | 3+ critical |
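The benchmark bands can be encoded once and reused across targets. A small sketch; the boundary numbers come from the table above, and values landing exactly on a boundary are assigned to the weaker band here, a choice a real scorecard should document explicitly.

```python
BANDS = ["good", "acceptable", "concerning", "critical"]

def rate(value, bounds, higher_is_better=True):
    """Place a metric into the good/acceptable/concerning/critical bands.

    `bounds` are the three band boundaries from the benchmark table,
    listed from the 'good' side outward. Boundary values fall into the
    weaker band (e.g. exactly 80% coverage rates 'acceptable').
    """
    for band, bound in zip(BANDS, bounds):
        if (value > bound) if higher_is_better else (value < bound):
            return band
    return BANDS[-1]

print(rate(85, [80, 60, 40]))                       # coverage -> good
print(rate(7, [3, 5, 10], higher_is_better=False))  # duplication -> concerning
```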
Calculating Financial Impact
Convert metrics to dollars:
Test Coverage Gap:
- Current: 35% → Target: 70%
- Estimated effort: 800 hours to add tests
- Cost: 800 × $150/hour = $120,000
Technical Debt (SonarQube):
- Reported: 180 person-days
- Cost: 180 × $1,200/day = $216,000
Dependency Updates:
- 15 outdated packages, 3 with critical vulnerabilities
- Estimated effort: 160 hours (two engineers for two weeks)
- Cost: 160 × $150/hour = $24,000
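The arithmetic above reduces to one helper. A sketch using the blended rates from these examples ($150/hour, which implies $1,200 per 8-hour day; the $24K dependency figure corresponds to 160 hours at this rate). Actual rates vary by market and team.

```python
HOURLY_RATE = 150               # blended engineering rate from the examples
DAILY_RATE = HOURLY_RATE * 8    # $1,200/day, matching the SonarQube example

def remediation_cost(hours=0, person_days=0):
    """Dollar cost of a remediation estimate given in hours and/or days."""
    return hours * HOURLY_RATE + person_days * DAILY_RATE

print(remediation_cost(hours=800))        # test-coverage gap  -> 120000
print(remediation_cost(person_days=180))  # technical debt     -> 216000
print(remediation_cost(hours=160))        # dependency updates -> 24000
```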
Critical Red Flags
These findings should trigger immediate concern and deeper investigation:
| Red Flag | Risk | Typical Remediation Cost |
|---|---|---|
| Test coverage below 30% | High defect rates, fear of changes | $100K - $500K |
| No automated testing pipeline | Manual QA bottleneck, slow releases | $50K - $150K |
| Hard-coded credentials/secrets | Security breach risk | $25K - $100K + breach risk |
| Single developer owns >60% of code | Bus factor risk, knowledge loss | $100K - $300K (retention/transfer) |
| No code review process | Quality control gap | $25K - $50K (process implementation) |
| Framework/language end-of-life | Security risk, talent scarcity | $500K - $5M (migration) |
| Circular dependencies | Architectural problems | $200K - $1M (refactoring) |
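Some of these red flags can be screened automatically. Below is a toy scan for hard-coded secrets; the two patterns are deliberately minimal and purely illustrative, whereas purpose-built scanners (gitleaks, TruffleHog, and similar) use large rule sets plus entropy analysis and catch far more.

```python
import re

# Deliberately small, illustrative rule set.
SECRET_PATTERNS = [
    # name = "long literal" style assignments
    re.compile(r"(?i)(api[_-]?key|secret|password|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
    # the shape of an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def scan_text(text):
    """Return (line_number, line) pairs that match a secret pattern."""
    return [
        (n, line.strip())
        for n, line in enumerate(text.splitlines(), start=1)
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

print(scan_text('db_password = "s3cr3t-hunter2"'))
```

A hit from a scan like this warrants the same response as the table suggests: treat the credential as compromised, rotate it, and budget for secrets-management tooling.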
What "Good" Looks Like
Signs of a healthy codebase:
- Consistent style: Code looks like it was written by one team, not random individuals
- Clear structure: Easy to find where functionality lives
- Meaningful names: Variables, functions, and classes describe their purpose
- Appropriate abstractions: DRY without over-engineering
- Comprehensive tests: Critical paths well-covered, tests run automatically
- Current dependencies: Libraries regularly updated, no known vulnerabilities
- Documentation: README, architecture docs, API documentation exist and are current
- Clean git history: Meaningful commit messages, PRs reviewed, branches merged cleanly
The Single Point of Failure: When One Developer Owns Everything
A strategic acquirer was purchasing a fintech startup for $45M. Code analysis revealed something concerning: one developer had authored 72% of the codebase, including 100% of the payment processing logic and database layer.
During management interviews, we learned this developer had already given notice—he was joining a competitor in 6 weeks. There was no documentation for the core systems, and the remaining three developers had never touched the payment code.
The acquirer faced a choice: walk away, or negotiate. They chose to renegotiate with a $3M price reduction, a $2M retention bonus for the departing developer (contingent on 6-month knowledge transfer), and a 12-month consulting agreement.