
feat(format): kernel-launch-budget-v1 + ica-v1 7-gate PARTIAL discharge #1402

Closed
noahgift wants to merge 2 commits into main from feat/kl-ica-001-007-partial-discharge

Conversation

@noahgift
Contributor

@noahgift noahgift commented May 2, 2026

Summary

Bundles two sister contracts:

  • kernel-launch-budget-v1 (FALSIFY-KL-001..004): per-token formula, 12-kernel decomposition, monotonicity, SIMD≡scalar
  • ica-v1 (FALSIFY-ICA-001..003): output shape, deterministic, finite

28 unit tests including 5×5 monotonicity sweep + 4-bucket decomposition cases.
Algorithm-level coverage advances by 7 gates; runtime ship % unchanged.

Gates bound

| Gate ID | Rule |
| --- | --- |
| KL-001 | observed launches == 12 * num_layers + final_kernels |
| KL-002 | per-layer decomposition sums to exactly 12 |
| KL-003 | strictly monotone: la < lb → launches_a < launches_b |
| KL-004 | SIMD budget == scalar budget |
| ICA-001 | transformed shape == (n_samples, n_components) |
| ICA-002 | transform(X) bit-deterministic |
| ICA-003 | every output finite |
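The first two KL gates can be sketched as a pinned constant plus a budget formula. This is an illustrative sketch only: the bucket names, their values, and the `final_kernels` count below are assumptions, not the actual aprender-core decomposition.

```rust
// Illustrative sketch of KL-001/KL-002; the four bucket names and their
// values are assumptions, not the real aprender-core decomposition.
const ATTN_KERNELS: u64 = 5;
const MLP_KERNELS: u64 = 4;
const NORM_KERNELS: u64 = 2;
const RESIDUAL_KERNELS: u64 = 1;
const AC_KL_PER_LAYER_KERNELS: u64 = 12;

// KL-002: if a refactor fuses two kernels (sum = 11) or adds one
// (sum = 13), this const assertion fails at build time, i.e. at PR time.
const _: () = assert!(
    ATTN_KERNELS + MLP_KERNELS + NORM_KERNELS + RESIDUAL_KERNELS
        == AC_KL_PER_LAYER_KERNELS
);

// KL-001: per-token launch budget.
fn launch_budget(num_layers: u64, final_kernels: u64) -> u64 {
    AC_KL_PER_LAYER_KERNELS * num_layers + final_kernels
}

fn main() {
    // e.g. a 24-layer model with 3 (assumed) final kernels: 12 * 24 + 3 = 291
    assert_eq!(launch_budget(24, 3), 291);
}
```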

Five Whys

See the commit message — it captures the pinned `12` constant, the bit-exact rationale for ICA-002, and the 5×5 monotonicity sweep rationale.

Test plan

  • cargo test -p aprender-core --lib kl_ica — 28 passed
  • PMAT pre-commit gates green
  • CI green

🤖 Generated with Claude Code

Bundles two sister contracts in one verdict module:

kernel-launch-budget-v1 (FALSIFY-KL-001..004):
- KL-001: per-token launches == 12 * num_layers + final_kernels
- KL-002: per-layer kernel decomposition sums to exactly 12
- KL-003: launches strictly monotone in num_layers
- KL-004: SIMD vs scalar budget calculation bit-equal
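The KL-004 gate above can be sketched as two accumulation paths over the same inputs. Both function names and the chunked reduction are hypothetical stand-ins for the real SIMD/scalar budget code, not the aprender-core API.

```rust
// Hypothetical sketch of KL-004: a chunked ("SIMD-style") accumulation
// must produce the same budget as the plain scalar loop.
fn budget_scalar(per_layer: &[u64], final_kernels: u64) -> u64 {
    per_layer.iter().sum::<u64>() + final_kernels
}

fn budget_chunked(per_layer: &[u64], final_kernels: u64) -> u64 {
    // 4-wide lanes with a remainder tail, mimicking a SIMD reduction.
    let chunks = per_layer.chunks_exact(4);
    let tail: u64 = chunks.remainder().iter().sum();
    let mut lanes = [0u64; 4];
    for c in chunks {
        for i in 0..4 {
            lanes[i] += c[i];
        }
    }
    lanes.iter().sum::<u64>() + tail + final_kernels
}

fn main() {
    let per_layer = vec![12u64; 24]; // 24 layers, 12 kernels each
    assert_eq!(budget_scalar(&per_layer, 3), budget_chunked(&per_layer, 3));
}
```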

ica-v1 (FALSIFY-ICA-001..003):
- ICA-001: transformed shape == (n_samples, n_components)
- ICA-002: transform(X) bit-deterministic across calls
- ICA-003: every output finite for finite input
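A minimal sketch of why ICA-002 compares bit patterns rather than approximate values; `bit_identical` is a hypothetical helper, not the module's real API.

```rust
// Hypothetical helper for an ICA-002-style check: two outputs are "the
// same" only if every f64 carries an identical bit pattern.
fn bit_identical(a: &[f64], b: &[f64]) -> bool {
    a.len() == b.len() && a.iter().zip(b).all(|(x, y)| x.to_bits() == y.to_bits())
}

fn main() {
    let y1 = [0.1 + 0.2, -0.0];
    let y2 = [0.1 + 0.2, 0.0];
    // A float-tolerant compare calls these equal (-0.0 == 0.0 in IEEE 754)...
    assert!(y1.iter().zip(&y2).all(|(a, b)| a == b));
    // ...but the bit-exact compare catches the sign-bit drift.
    assert!(!bit_identical(&y1, &y2));
}
```

The same helper makes drift from a random-state leak visible even when the numeric difference is below any tolerance threshold.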

## Five Whys

1. Why bundle these two contracts? Both are peripheral and span the
   GPU-launch-counting + ICA decomposition coverage band; one
   verdict module captures both without duplicating provenance-pin
   overhead.
2. Why does this block ship? Coverage % cannot move while these
   peripheral contracts are unbound at PARTIAL_ALGORITHM_LEVEL.
3. Why pin `AC_KL_PER_LAYER_KERNELS = 12` as a const? The
   contract specifies "Component sum = 12" as a static assertion.
   A future refactor that fuses two kernels into one (sum=11) or
   adds an extra kernel (sum=13) must trip the gate at PR time
   — not silently change the launch budget downstream.
4. Why bit-exact (`to_bits()`) for ICA-002? The contract says
   "Transform deterministic" — ICA's whitening + projection is
   a fixed matrix multiply once fitted. Any drift between two
   `transform(X)` calls indicates a random-state leak (e.g., a
   stale RNG getting consumed for tie-breaking). Float-tolerant
   compare would mask that exact regression class.
5. Why a 5×5 monotonicity sweep for KL-003 (instead of just one
   pair)? The contract is "more layers → more launches" across
   the entire layer-count domain. Sweeping all (la, lb) pairs in
   {0, 1, 12, 24, 50} catches degenerate cases (la=0, la=lb,
   la>lb, la<lb) that a single point would miss.
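The 5×5 sweep described above can be sketched as a nested loop over the layer-count domain; `launch_budget` here is an illustrative stand-in for the real budget function.

```rust
// Sketch of the KL-003 monotonicity sweep over all 25 (la, lb) pairs.
fn launch_budget(num_layers: u64, final_kernels: u64) -> u64 {
    12 * num_layers + final_kernels // final_kernels value is illustrative
}

fn main() {
    let domain = [0u64, 1, 12, 24, 50];
    for &la in &domain {
        for &lb in &domain {
            // The full 5x5 grid also visits la == lb and la > lb, so the
            // degenerate cases are exercised; only ordered pairs assert.
            if la < lb {
                assert!(launch_budget(la, 3) < launch_budget(lb, 3));
            }
        }
    }
}
```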

Adds 28 unit tests including a 5×5 monotonicity sweep and 4-bucket
decomposition cases. Realistic-healthy walks the canonical 24-layer
Qwen + ICA fit; pre-fix walks 7 simultaneous regressions.

No runtime % shift; algorithm-level coverage advances by 7 gates.
@noahgift noahgift force-pushed the feat/kl-ica-001-007-partial-discharge branch from 170db99 to db23b5a Compare May 11, 2026 15:18
@noahgift noahgift enabled auto-merge (squash) May 11, 2026 15:18
@noahgift
Contributor Author

Superseded by #1637 (135-PR squash). The commit content is included verbatim in that PR's diff. Closing now to release runner slots; this PR would have auto-closed when #1637 merges.

@noahgift noahgift closed this May 12, 2026
auto-merge was automatically disabled May 12, 2026 09:20

Pull request was closed

