
feat(format): tensor-layout-v1 12-gate PARTIAL discharge #1384

Closed

noahgift wants to merge 2 commits into main from feat/tensorlayout-001-012-partial-discharge

feat(format): tensor-layout-v1 12-gate PARTIAL discharge#1384
noahgift wants to merge 2 commits into
mainfrom
feat/tensorlayout-001-012-partial-discharge

Conversation

@noahgift
Contributor

@noahgift noahgift commented May 2, 2026

Summary

Binds the canonical layout source-of-truth contract (referenced from CLAUDE.md as LAYOUT-001/002) at PARTIAL_ALGORITHM_LEVEL via 12 verdict functions. Highest-leverage peripheral binding to date.

  • 41 unit tests including 7-bucket density sweep + 7-bucket GPU-parity sweep
  • Algorithm-level coverage advances by 12 gates; runtime ship % unchanged

Gates bound

| Gate ID     | Rule                                                              |
| ----------- | ----------------------------------------------------------------- |
| FALSIFY-001 | Embedding density: ≤ 50% zeros                                    |
| FALSIFY-002 | Type enforcement: AprTransformer uses ValidatedEmbedding          |
| FALSIFY-003 | NaN rejection: every weight is finite                             |
| FALSIFY-004 | Spot check: ≤ 50% leading-row zeros (PMAT-234 catch)              |
| FALSIFY-005 | lm_head shape: [vocab, hidden] (not transposed)                   |
| FALSIFY-006 | Cross-crate: aprender + realizar agree on validation result + msg |
| FALSIFY-007 | Quant dispatch: zero `_ =>` catchalls in WeightQuantType matches  |
| FALSIFY-008 | Wrong-kernel: Q6K via Q4K kernel → max_abs > 1e6 OR NaN           |
| FALSIFY-009 | Q4K roundtrip: ≥ 70% token-id match vs F32 inference              |
| FALSIFY-010 | Embedded tokenizer present in APR file                            |
| FALSIFY-011 | APR GPU ≥ 80% of GGUF GPU throughput                              |
| FALSIFY-012 | SafeTensors GPU ≥ 80% of GGUF GPU throughput                      |
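The density gates (FALSIFY-001/004) reduce to a zero-fraction check against the pinned `AC_TL_MAX_ZERO_FRACTION` constant. A minimal sketch of that verdict shape, assuming an illustrative `Verdict` enum and function name (the actual signatures in aprender-core may differ):

```rust
/// Illustrative verdict type; the real gate functions in aprender-core
/// may use a different shape.
#[derive(Debug, PartialEq)]
pub enum Verdict {
    Pass,
    Fail(String),
}

/// Pinned constant from the PR description.
pub const AC_TL_MAX_ZERO_FRACTION: f32 = 0.50;

/// FALSIFY-001 sketch: reject an embedding whose zero fraction
/// exceeds 50%. Exactly 50% still passes (the rule is ≤ 50%).
pub fn falsify_001_density(weights: &[f32]) -> Verdict {
    if weights.is_empty() {
        return Verdict::Fail("empty embedding".into());
    }
    let zeros = weights.iter().filter(|w| **w == 0.0).count();
    let fraction = zeros as f32 / weights.len() as f32;
    if fraction > AC_TL_MAX_ZERO_FRACTION {
        Verdict::Fail(format!("zero fraction {fraction:.2} > 0.50"))
    } else {
        Verdict::Pass
    }
}
```

The same predicate applied only to the leading rows of the embedding matrix gives the FALSIFY-004 spot check.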

Pinned constants

  • AC_TL_MAX_ZERO_FRACTION = 0.50
  • AC_TL_SPOT_CHECK_MAX_ZERO_FRACTION = 0.50
  • AC_TL_GPU_PARITY_FLOOR = 0.80

Five Whys

See commit message — captures why 70% prefix-match for Q4K roundtrip, why zero catchalls for dispatch, and why max_abs > 1e6 OR NaN models "garbage."

Test plan

  • cargo test -p aprender-core --lib tensorlayout_001_012 — 41 passed
  • PMAT pre-commit gates green
  • CI green

🤖 Generated with Claude Code

Binds FALSIFY-001..012 from tensor-layout-v1 — the canonical
source-of-truth contract referenced from CLAUDE.md (LAYOUT-001/002).

- 001: ValidatedEmbedding rejected when > 50% zeros (density)
- 002: AprTransformer cannot bypass ValidatedEmbedding (Poka-Yoke)
- 003: ValidatedWeight rejects any NaN
- 004: 94.5% leading zeros rejected (PMAT-234 spot-check bug)
- 005: lm_head shape MUST be [vocab, hidden]
- 006: aprender + realizar agree on validation (cross-crate parity)
- 007: zero `_ =>` catchall in WeightQuantType dispatch
- 008: Q6K through Q4K kernel produces detectable garbage
- 009: SafeTensors→APR Q4K ≥ 70% token match vs F32 inference
- 010: APR has embedded BPE tokenizer (no sibling tokenizer.json)
- 011: APR GPU ≥ 80% of GGUF GPU throughput
- 012: SafeTensors GPU ≥ 80% of GGUF GPU throughput
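The zero-catchall rule in 007 leans on Rust's exhaustiveness checking: with no `_ =>` arm, adding an enum variant is a compile error at every dispatch site until it is handled. A hedged sketch with hypothetical variants and kernel names (the real `WeightQuantType` likely has more variants):

```rust
/// Hypothetical subset of the quant-type enum; variant and kernel
/// names here are illustrative, not the real aprender-core API.
#[derive(Debug, Clone, Copy)]
pub enum WeightQuantType {
    F32,
    Q4K,
    Q6K,
}

pub fn kernel_name(q: WeightQuantType) -> &'static str {
    // No `_ =>` arm: adding a new variant fails to compile here,
    // which is the wrong-kernel regression guard FALSIFY-007 wants.
    match q {
        WeightQuantType::F32 => "matmul_f32",
        WeightQuantType::Q4K => "matmul_q4k",
        WeightQuantType::Q6K => "matmul_q6k",
    }
}
```

A catchall arm would compile silently when a new quant type is added, routing it to the wrong kernel — the PMAT-232 regression class.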

## Five Whys

1. Why does tensor-layout-v1 list 12 falsification IDs without
   algorithm-level discharge? PMAT lints flagged FALSIFY-001..012 as
   unbound at PARTIAL_ALGORITHM_LEVEL.
2. Why is this the highest-leverage peripheral binding done so far?
   tensor-layout-v1 is referenced from CLAUDE.md as the canonical
   layout source-of-truth and from contracts/tensor-layout-v1.yaml
   §type_enforcement. Coverage drift here is felt across every
   load/inference path. 12 gates land in one PR.
3. Why a 70% prefix-match threshold for FALSIFY-009 (not 100%)?
   Q4K quantization introduces small but bounded drift; per the
   contract's "coherent output" predicate, exact token-id parity
   is too strict and would Fail every healthy roundtrip. 70% catches
   the regression class (garbage like "olumbia+lsi") while letting
   the drift-only deltas through.
4. Why a strict `_ => Fail` for FALSIFY-007 with zero tolerance?
   The contract is binary-by-design — even one catchall arm in
   WeightQuantType dispatch enables the silent wrong-kernel
   regression class (PMAT-232, ALG-006). Tolerance > 0 would let
   the regression slip back in via "we'll fix the catchall later."
5. Why model FALSIFY-008's "garbage" as `max_abs > 1e6 OR has_nan`?
   Healthy quantized matmul outputs are in roughly [-100, 100]
   range; a wrong-kernel dispatch that doesn't produce magnitude
   blow-up or NaN means the format-isolation invariant has failed
   — formats that LOOK compatible-enough to share a kernel are a
   silent-corruption class.
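The garbage predicate from why 5 and the prefix-match ratio from why 3 can be sketched as follows; the thresholds come from the PR text, but the function names are illustrative:

```rust
/// FALSIFY-008 sketch: wrong-kernel output counts as garbage iff it
/// blows up in magnitude or contains NaN. Healthy quantized matmul
/// stays roughly within [-100, 100], so 1e6 is far outside any
/// legitimate output.
pub fn is_garbage(output: &[f32]) -> bool {
    output.iter().any(|v| v.is_nan() || v.abs() > 1e6)
}

/// FALSIFY-009 sketch: fraction of positions where Q4K token ids
/// match the F32 reference; the gate passes at >= 0.70.
pub fn token_match_fraction(q4k: &[u32], f32_ref: &[u32]) -> f32 {
    let n = q4k.len().min(f32_ref.len());
    if n == 0 {
        return 0.0;
    }
    let matching = q4k.iter().zip(f32_ref).filter(|(a, b)| a == b).count();
    matching as f32 / n as f32
}
```

Note that `is_garbage` deliberately has no middle ground: an output that is neither NaN nor magnitude-blown is treated as the format-isolation failure the contract describes.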

Adds 41 unit tests including a 7-bucket density sweep and a
7-bucket GPU-parity sweep. Realistic-healthy walks the canonical
Qwen2.5-Coder-1.5B / RTX 4090 healthy path; pre-fix walks 12
simultaneous regressions (PMAT-234 density, type bypass, NaN leak,
transposed lm_head, cross-crate divergence, catchall, format
collision, garbage roundtrip, missing tokenizer, GH-87/88 GPU
slowdowns).

No runtime % shift; algorithm-level coverage advances by 12 gates.
@noahgift noahgift force-pushed the feat/tensorlayout-001-012-partial-discharge branch from 5f72e79 to 9503c72 Compare May 11, 2026 15:35
@noahgift noahgift enabled auto-merge (squash) May 11, 2026 15:35
@noahgift
Contributor Author

Superseded by #1637 (135-PR squash). The commit content is included verbatim in that PR's diff. Closing now to release runner slots; otherwise this PR would have auto-closed when #1637 merged.

@noahgift noahgift closed this May 12, 2026
auto-merge was automatically disabled May 12, 2026 09:21

Pull request was closed
