DO-178C Requirements Traceability: A Practical Guide for Engineers

DO-178C demands end-to-end bidirectional traceability from high-level requirements through code to test cases. This guide walks through what that means in practice, the most common pitfalls, and how modern tooling changes the game.

March 10, 2026 · 7 min read

What DO-178C Actually Requires

DO-178C (Software Considerations in Airborne Systems and Equipment Certification) is the primary international standard governing the development of safety-critical aviation software. When it comes to requirements, the standard is explicit: you must demonstrate bidirectional traceability from high-level requirements (HLRs) all the way through low-level requirements (LLRs), source code, and test cases — and back again.

Section 6 of the standard defines the software development processes, while Table A-7 lists the objectives for requirements traceability that apply at each Design Assurance Level (DAL). For DAL A software — the most safety-critical category — every single HLR must trace to at least one LLR, every LLR must trace to source code, and every test case must trace back to the requirement it verifies. Gaps are not acceptable.

The Bidirectional Link Model

Most engineers understand traceability in one direction: requirement R-001 is verified by test T-042. But DO-178C auditors also want to walk the other direction. Given test T-042, they should be able to identify exactly which requirement it covers. This bidirectionality serves two purposes:

  • Forward coverage: Every requirement has at least one corresponding test case — nothing is untested.
  • Backward coverage: Every test case corresponds to at least one requirement — there is no "test theatre" (tests that verify nothing in scope).

The link chain typically looks like this: System Requirement → HLR → LLR → Source Code Module → Test Case. Each link in that chain needs to be documented, reviewable, and verifiable by a DER (Designated Engineering Representative) or an ACO (Aircraft Certification Office) reviewer.
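The bidirectional model described above can be sketched as a small in-memory index. This is an illustrative data structure, not any particular tool's API; the artifact IDs (SYS-001, HLR-001, T-042, and so on) are made up for the example.

```python
from collections import defaultdict

class TraceIndex:
    """Hypothetical bidirectional trace-link index."""

    def __init__(self):
        self.downstream = defaultdict(set)  # parent artifact -> child artifacts
        self.upstream = defaultdict(set)    # child artifact -> parent artifacts

    def link(self, parent, child):
        # Record the link once; both directions stay consistent by construction,
        # which is what makes the backward (test -> requirement) query free.
        self.downstream[parent].add(child)
        self.upstream[child].add(parent)

idx = TraceIndex()
idx.link("SYS-001", "HLR-001")
idx.link("HLR-001", "LLR-001")
idx.link("LLR-001", "src/attitude.c")
idx.link("LLR-001", "T-042")

# Forward: what does LLR-001 trace down to?  Backward: what does T-042 cover?
print(sorted(idx.downstream["LLR-001"]))  # ['T-042', 'src/attitude.c']
print(sorted(idx.upstream["T-042"]))      # ['LLR-001']
```

Storing every link in both directions is the key design choice: a reviewer's backward query ("which requirement does this test verify?") becomes a lookup rather than a scan.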

Traceability Levels and DAL Applicability

Not every project operates at DAL A. The standard scales its demands:

  • DAL A (Catastrophic failure): Full bidirectional HLR→LLR→code→test traceability required. Structural coverage analysis to the MC/DC (modified condition/decision coverage) criterion required.
  • DAL B (Hazardous failure): Same traceability structure; decision coverage replaces MC/DC.
  • DAL C (Major failure): HLR→LLR→test traceability; structural coverage relaxes to statement coverage.
  • DAL D (Minor failure): Traceability at the requirement level is still required, but the verification rigor decreases.

Understanding your DAL before setting up your traceability structure is critical — it determines how granular your links need to be.

Common Pitfalls

1. Orphaned requirements. A requirement with no outbound trace links to either child requirements or test cases is a compliance gap. In audit terms, it means the system has a requirement that is not verified — immediately triggering a finding.

2. Untraceable test cases. Test cases created by developers that do not map back to any requirement create "noise" in the verification corpus. Auditors view these as wasted effort at best, and evidence of poor configuration control at worst.
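Both of these first two pitfalls reduce to set arithmetic over the link table, which is why a tool (or even a pre-review script) can catch them mechanically. A minimal sketch, with invented artifact IDs and a flat list of (source, target) links:

```python
# Illustrative audit check: orphaned requirements have no outbound trace
# links; untraceable test cases have no requirement pointing at them.
requirements = {"HLR-001", "HLR-002", "LLR-001"}
test_cases = {"T-042", "T-099"}
links = {("HLR-001", "LLR-001"), ("LLR-001", "T-042")}

linked_sources = {src for src, _ in links}
linked_targets = {dst for _, dst in links}

orphaned_reqs = requirements - linked_sources      # requirements never traced onward
untraceable_tests = test_cases - linked_targets    # tests verifying nothing in scope

print(sorted(orphaned_reqs))      # ['HLR-002']
print(sorted(untraceable_tests))  # ['T-099']
```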

3. Stale links after requirement changes. A common failure mode: a requirement is revised (title, description, acceptance criteria), but the test cases and downstream requirements are not updated. The links remain, but they now represent an outdated contract. This is precisely what "suspect links" are designed to catch — any time a meaningful field on a traced requirement changes, downstream links should be flagged for review.
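The suspect-link mechanism can be sketched in a few lines. The set of "meaningful" fields and the record layout here are assumptions for illustration; real tools differ in which fields trigger the flag.

```python
import datetime

# Assumed set of fields whose change should invalidate downstream links.
MEANINGFUL_FIELDS = {"title", "description", "status", "type"}

def update_requirement(req, links, field, new_value):
    """Apply a field change; flag outbound links as suspect if it matters."""
    if field in MEANINGFUL_FIELDS and req.get(field) != new_value:
        for link in links:
            if link["source"] == req["id"]:
                link["suspect"] = True
                # Timestamp the flag so the audit trail shows when it was set.
                link["flagged_at"] = datetime.datetime.now(datetime.timezone.utc)
    req[field] = new_value

req = {"id": "HLR-007", "title": "Altitude hold", "description": "..."}
links = [{"source": "HLR-007", "target": "T-042", "suspect": False}]

update_requirement(req, links, "description", "Altitude hold within ±50 ft")
print(links[0]["suspect"])  # True
```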

4. Spreadsheet-based traceability matrices. Many teams still manage traceability in Excel. The problem is not that spreadsheets cannot store links — they can — it is that they offer no enforcement, no cycle detection, no automated suspect flagging, and no real-time collaboration. A missed cell update in column G of a 400-row matrix can introduce a compliance gap that costs weeks to find during an audit.

What Modern Tooling Changes

Purpose-built requirements traceability tools enforce the link model structurally rather than procedurally. The difference is significant in practice:

  • Structural enforcement: You cannot create a test-case link to a deleted requirement. The tool prevents the inconsistency at the point of mutation, rather than waiting for a quarterly review to catch it.
  • Automated suspect marking: When a requirement's title, description, status, or type changes, any outbound trace links are automatically flagged as "suspect" and assigned to the relevant engineer for re-review. The audit trail shows exactly when the flag was set and when it was cleared.
  • Bidirectional queries at runtime: Instead of manually cross-referencing tabs in a spreadsheet, an engineer can click on a requirement and instantly see all upstream requirements it satisfies, all downstream requirements it generates, and all test cases that verify it.
  • Coverage statistics: The coverage dashboard computes, in real time, the percentage of leaf-level requirements that have at least one verified link. This number is the one that matters to auditors, and it should always be visible — not something you compute manually the week before a certification review.
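The coverage number in that last bullet is simple to compute, which is exactly why it should never be stale. A minimal sketch with hypothetical requirement and test IDs:

```python
# Coverage as an auditor reads it: the share of leaf-level requirements
# that have at least one verifying test link.
leaf_requirements = ["LLR-001", "LLR-002", "LLR-003", "LLR-004"]
verified_links = {"LLR-001": ["T-042"], "LLR-003": ["T-051", "T-052"]}

covered = sum(1 for r in leaf_requirements if verified_links.get(r))
coverage_pct = 100.0 * covered / len(leaf_requirements)
print(f"{coverage_pct:.1f}% of leaf requirements verified")  # 50.0%
```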

Practical Steps for Your Next Project

If you are setting up traceability for a new DO-178C project, here is a pragmatic starting point:

  1. Define your requirement hierarchy before writing a single requirement. Agree on the depth (system → HLR → LLR → code) and the identifier scheme (SYS-XXX, HLR-XXX, LLR-XXX).
  2. Import or create your system-level requirements first. These become the roots of your traceability tree.
  3. Decompose each system requirement into HLRs. Every HLR must trace back to at least one system requirement — this is your first traceability layer.
  4. Decompose HLRs into LLRs. At this stage, begin creating verification methods for each LLR.
  5. When test cases are created in your test management tool, link them to the corresponding LLRs. This closes the loop.
  6. Run a coverage report before every design review. Any requirement with 0% coverage is a finding waiting to happen.
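The six steps above can be checked end to end with one recursive walk: a requirement passes only if some chain beneath it eventually reaches a test case. This sketch assumes test IDs start with "T-" and uses an invented hierarchy.

```python
# Child links per artifact, following the system -> HLR -> LLR -> test chain.
children = {
    "SYS-001": ["HLR-001"],
    "HLR-001": ["LLR-001", "LLR-002"],
    "LLR-001": ["T-042"],
    "LLR-002": [],            # decomposed, but not yet verified by any test
}

def reaches_test(node):
    """True if any chain below this artifact ends in a test case."""
    if node.startswith("T-"):
        return True
    return any(reaches_test(c) for c in children.get(node, []))

# Any requirement whose chain never reaches a test is a finding in waiting.
findings = [r for r in children if not r.startswith("T-") and not reaches_test(r)]
print(findings)  # ['LLR-002']
```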

Traceability is not a deliverable you produce at the end of a project — it is a discipline you practice throughout the entire development lifecycle. The teams that treat it as an ongoing activity rather than a pre-audit scramble consistently have smoother certification experiences.


Ready to modernize your requirements process?

Reqlume gives aerospace engineering teams bidirectional traceability, live coverage dashboards, and compliance-ready exports — without the complexity of legacy tools.