bountynet-genesis.exe // operational briefing

Trust is a hardware constraint.

BountyNet Genesis binds code identity to silicon through a hash the CPU vendor signs. Modify the code — the measurement changes. Re-attest — the signature still verifies, but the claim no longer matches what anyone expected. The proof survives; the lie doesn't.

Proven on real silicon across three independent TEE vendors on 2026-04-14 (Intel TDX, AMD SEV-SNP, AWS Nitro). The runner that built this binary is itself an attested build of this repo, and the endpoint serving this page's attestation is a live TDX enclave you can query right now — no account, no secrets, no cloud-provider middle-man.

operational_status.log
metric            | status  | specification
platform coverage | PROVEN  | Intel TDX + AMD SEV-SNP + AWS Nitro — live hardware, real quotes captured
attestation chain | LIVE    | stage 0 → stage 1 with report_data binding, recursive chain walker
vendor root trust | PINNED  | Intel SGX Root CA + AMD ARK + AWS Nitro Root CA
self-build loop   | CLOSED  | CI builds bountynet on its own self-hosted TDX runner
live endpoint     | SERVING | https://34.45.143.81/ — attested-TLS, GCP c3-standard-4
regression gate   | 65/65   | 7 tests run real hardware bytes through verify_platform_quote on every commit
[ live_remote_attestation_module ]

Do not take our word for it. The module below reproduces Value X — the application-identity half of LATTE's two-layer check — in this browser session, against any commit of the repo. Click the button; the widget walks v2/ via the GitHub tree API, hashes every file with native crypto.subtle.digest('SHA-384'), and produces the same 48-byte result bountynet build produces inside the TDX runner.

Important distinction. This widget verifies Value X — the application identity. It does not verify the silicon signature chain against the vendor root CA (that requires ECDSA verification we haven't yet ported to WASM). For the full platform check, run cargo test --test hardware_regression locally, or bountynet check against the live endpoint — both described in section 2 below.
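The widget's hashing step is easy to reproduce outside a browser. Below is a minimal Python sketch of the idea — hash every file in a tree with SHA-384, in deterministic order, and fold the results into one digest. The repo's exact canonicalization (file ordering, path encoding, separators) is not specified here, so treat this as the shape of the computation, not a byte-exact reproduction of Value X:

```python
import hashlib
import pathlib

def value_x(tree: str) -> str:
    """Sketch of a deterministic source-tree identity: SHA-384 over
    every file's relative path and content hash, walked in sorted
    order. Illustrative only — bountynet's canonicalization may differ."""
    h = hashlib.sha384()
    root = pathlib.Path(tree)
    for f in sorted(p for p in root.rglob("*") if p.is_file()):
        h.update(str(f.relative_to(root)).encode())          # bind the path
        h.update(hashlib.sha384(f.read_bytes()).digest())    # bind the bytes
    return h.hexdigest()   # 48-byte digest -> 96 hex chars
```

Any change to any file, or any rename, flips the result — which is exactly why a stale tree cannot re-attest as the expected identity.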

live lineage // archive & current head

archive // commit 2593db6 (ouroboros run) 58b663bbb60a906f29a2e5141c67a4a163271a9af24304e282bdbbbb8fb94fa5bc337dda323d37e7f78f429bcf80810c
live // commit 8dd008e (current main) f44f87e6f0965493ac6822dd89b155cb82758d6898751196f7892d12684fc49594fa3a8d7dc548d8496fe17597a32c20

Both values are real. The archive is the Value X the ouroboros CI run produced at commit 2593db6, signed by Intel's TDX module and committed byte-identically at v2/testdata/chain/tdx_ouroboros.cbor. The live value is what the endpoint at https://34.45.143.81/ serves right now — different bytes because the tree advanced when tdx_ouroboros.cbor was added to v2/testdata/chain/. Same chain, same pinned root CA, two different commits. The widget reproduces either one; the live endpoint serves the current one.


full verification // 60 seconds locally

This is the flow that closes both halves of LATTE — application identity AND platform signature. You clone the repo, and the test suite loads real attestation bytes from v2/testdata/chain/ and runs them through verify_platform_quote, which checks the binding AND walks the signature chain to the pinned vendor root CA (Intel / AMD / AWS) for each platform. No TEE required on your machine.

$ git clone https://github.com/maceip/bountynet-genesis
$ cd bountynet-genesis/v2
$ cargo test --test hardware_regression

running 7 tests
test snp_stage0_verifies                              ... ok
test all_three_platforms_share_no_cross_contamination ... ok
test tdx_stage0_verifies                              ... ok
test snp_stage1_verifies_and_chains_to_stage0         ... ok
test tdx_stage1_verifies_and_chains_to_stage0         ... ok
test nitro_stage0_verifies                            ... ok
test ouroboros_attestation_verifies                   ... ok

test result: ok. 7 passed; 0 failed

against the live endpoint

A self-hosted GCP TDX runner is serving an attested-TLS endpoint right now. On every bountynet check invocation, the client pulls the leaf cert, extracts the EAT CBOR from the X.509 extension at OID 2.23.133.5.4.9, verifies the TLS channel binding, verifies the stage 1 TDX quote against Intel's pinned root, and walks previous_attestation back to stage 0. No web-PKI CA in the trust chain — Intel's silicon is the root.

$ bountynet check https://34.45.143.81/

[bountynet] === attested-TLS check ===
[bountynet] Target: 34.45.143.81:443
[bountynet] Leaf cert: 17284 bytes DER
[bountynet] EAT extension: 16896 bytes
[bountynet] EAT profile: https://bountynet.dev/eat/v2
[bountynet] Platform:    Some(Tdx)
[bountynet] Value X:     f44f87e6f0965493ac6822dd89b155cb82758d6898751196f7892d12684fc49594fa3a8d7dc548d8496fe17597a32c20
[bountynet] SPKI binding:    PASS
[bountynet] Quote binding:   PASS
[bountynet] Quote signature: PASS
[bountynet]   MRTD: 8370d8f6d02f2d13e211e91c93fde923049522b241425a29a7bf0071ef49b250af4ef49d852fa3e10065d1b51dfce8fb
[bountynet] Chain step 1: verifying previous stage (8436 bytes EAT)
[bountynet]   ✓ step 1 quote verifies (Value X stable)
[bountynet] Chain:           PASS (2 stage(s) walked)
[bountynet] === Check Complete ===
[bountynet] 34.45.143.81 is a genuine Tdx TEE running Value X f44f87e6f0965493
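The SPKI binding: PASS line in the output above reduces to a single comparison: hash the public key the server actually presented over TLS, and require it to equal the claim the TEE signed. A hedged sketch (the real client parses the EAT from CBOR first; names here are illustrative):

```python
import hashlib

def channel_binding_ok(tls_spki_der: bytes, eat_tls_spki_hash: bytes) -> bool:
    """Channel binding: sha256 of the cert's SubjectPublicKeyInfo must
    equal the tls_spki_hash claim bound into the quote's report_data.
    A relayed quote from another host fails here, because the relay
    terminates TLS with a key the TEE never measured."""
    return hashlib.sha256(tls_spki_der).digest() == eat_tls_spki_hash
```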
the problem // software-only security is a fallacy

In a standard cloud environment, the host OS is omnipotent. Root on the hypervisor can scrape VM memory, intercept syscalls, modify binaries, and impersonate services. Normally you solve this by trusting the cloud provider. Genesis removes the human element from the trust equation: the silicon itself produces a signed measurement of every byte it's running, and a verifier can check that signature against a root CA the CPU vendor published years ago.

threat model

attack vector               | scenario                                       | genesis countermeasure                                          | result
memory scraping             | hypervisor reads VM memory                     | SNP/TDX encrypt memory; Nitro has no shared memory at all       | BLOCKED
attestation key theft       | silicon extraction                             | keys held in PSP / TDX Module / NSM — not exportable            | BLOCKED
pre-boot code injection     | swap binary before load                        | measurement covers boot image; verifier policy rejects unknown  | REJECTED
runtime source modification | attacker rewrites files on disk                | ratchet locks CT before build; stage 1 re-verifies at boot      | REJECTED
forged attestation relay    | serve real quote from a different host         | TLS key hash is in report_data — channel binding fails          | REJECTED
side-channel leakage        | cache timing / Spectre-class                   | not mitigated — ongoing research, vendor microcode updates only | NOT COVERED
denial of service           | hypervisor stops or throttles the VM           | not mitigated — TEE guarantees integrity, not availability      | NOT COVERED
buggy enclave code          | the attested code itself leaks secrets         | not mitigated — TEE measures the code, can't fix it             | NOT COVERED
single-vendor compromise    | Intel or AMD or AWS issues forged attestations | anytrust: build the same source on 2+ vendors, cross-witness    | MITIGATED (not eliminated)

The last four rows are the honest limits. Genesis is not a magic shield; it is a narrow, provable claim about what code is running on which hardware. Everything else — DoS resilience, side-channel hardening, bug-free enclave code, multi-vendor trust — is architectural work on top.

[ protocol_architecture: the chain ]

The system is self-verifying from the first instruction. Every stage's claim is signed by the CPU vendor before the next stage starts.

I. Stage 0 — Attested build. bountynet build runs inside the TEE. It locks the source tree (the ratchet, from Attestable Containers), runs the build command, hashes the output (A), computes Value X (sha384 of the frozen source), and collects a hardware quote whose report_data[0..32] binds all of the above. The quote is signed by the silicon.

II. Handover — Disk, not keys. The stage 0 attestation is written to disk alongside the artifact. There are no keys to hand over — the signing happened inside the TEE and the signature is portable.

III. Stage 1 — Attested runtime. bountynet run boots inside a TEE, loads the stage 0 attestation from disk, verifies it against the pinned vendor root, re-computes Value X from the on-disk source, confirms the match, generates its own TLS keypair, and collects a new hardware quote binding sha256(tls_spki) and sha256(stage0_attestation) into report_data. The new quote is served in an X.509 cert extension over TLS — attested-TLS.

IV. Continuous verification. Any client running bountynet check pulls the leaf cert, extracts the EAT, verifies the signature chain, checks the channel binding, walks previous_attestation back to stage 0, and confirms Value X is stable across the chain. Every link is a hash in a hardware-signed report_data. No gaps.
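The chain walk in step IV can be sketched as a loop over previous_attestation: verify every stage's quote, and require Value X to be identical at every hop. Field names and the verify_quote callback below are illustrative stand-ins for the repo's CBOR parsing and signature verification:

```python
def walk_chain(eat, verify_quote) -> int:
    """Recursive-chain sketch: each node must carry a verifiable quote
    and the same Value X as the leaf. Returns the number of stages
    walked. `eat` is any object with .value_x, .platform_quote,
    .previous — hypothetical names, not the repo's API."""
    expected_x = eat.value_x
    steps, node = 0, eat
    while node is not None:
        if not verify_quote(node.platform_quote):
            raise ValueError(f"quote failed at step {steps}")
        if node.value_x != expected_x:
            raise ValueError(f"Value X drifted at step {steps}")
        steps += 1
        node = node.previous
    return steps
```

A two-stage chain (stage 1 pointing back at stage 0) walks two steps; a tampered stage anywhere in the chain raises instead of degrading to a partial pass.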

sequenceDiagram
    participant Dev as developer
    participant Git as git repo
    participant S0 as stage 0 TEE<br/>(bountynet build)
    participant Disk as disk
    participant S1 as stage 1 TEE<br/>(bountynet run)
    participant Cli as verifier<br/>(bountynet check)
    Dev->>Git: push source
    Git->>S0: clone inside TEE
    S0->>S0: CT = sha384(source)
    S0->>S0: ratchet: freeze source read-only
    S0->>S0: build + A = sha384(artifact)
    S0->>S0: Value X = sha384(frozen source)
    S0->>S0: collect quote, report_data = binding
    S0->>Disk: attestation.cbor
    Note over S1: boot inside TEE
    Disk->>S1: load stage 0 EAT
    S1->>S1: verify stage 0 quote vs vendor root
    S1->>S1: recompute Value X from disk
    S1->>S1: set_previous(stage0_cbor)
    S1->>S1: collect new quote
    S1->>Cli: TLS handshake<br/>(cert carries stage 1 EAT)
    Cli->>Cli: extract EAT from cert extension
    Cli->>Cli: check TLS SPKI == eat.tls_spki_hash
    Cli->>Cli: verify stage 1 quote vs Intel root
    Cli->>Cli: walk previous_attestation
    Cli->>Cli: verify stage 0 quote vs Intel root
    Cli->>Cli: assert Value X stable across chain
    Cli-->>Dev: green / red
[ real_world: kms unwrap gated by PCR0 ]

The most concrete reason to care about attested runners: you can gate a secret on what code is about to see it. Not "what machine," not "which IAM principal," not "what network" — what code, hashed by the CPU before anything runs.

AWS KMS supports this directly for Nitro Enclaves via the kms:RecipientAttestation:PCR0 condition key. A Decrypt request sent from an enclave whose boot image doesn't match PCR0 gets refused at the KMS side. Bountynet's Nitro code path already generates an RSA keypair inside the enclave, embeds its public key in the NSM attestation document, and routes CiphertextForRecipient through the enclave for unwrapping. The flow is round-tripped in v2/tests/eat_kms_e2e.rs.

sequenceDiagram
    participant App as your app
    participant Enc as nitro enclave<br/>(bountynet run)
    participant KMS as AWS KMS
    App->>Enc: please unwrap <blob>
    Enc->>Enc: collect NSM attestation<br/>(RSA pubkey in user_data)
    Enc->>KMS: Decrypt + Recipient=attestation_doc
    KMS->>KMS: check kms:RecipientAttestation:PCR0<br/>against policy
    KMS-->>Enc: CiphertextForRecipient<br/>(encrypted to RSA pubkey)
    Enc->>Enc: unwrap with enclave-held RSA priv key
    Enc-->>App: plaintext<br/>(inside the attested process)

What KMS actually supports as RecipientAttestation conditions: PCR0, PCR1, PCR2, PCR8, plus image/module digests. There is no kms:RecipientAttestation:ValueX — PCR0 is the only hardware-signed identity KMS checks. So the story is two-layered:

  1. KMS gates on PCR0, which covers the bountynet boot image. "The enclave is running bountynet."
  2. Bountynet's EAT carries Value X inside the attested channel. Once KMS releases the wrapped secret, the enclave decrypts it and verifies its own Value X (LATTE's ratchet check). The attested code then decides whether to hand the plaintext to the application based on an arbitrary policy over Value X, previous_attestation, source_hash, and artifact_hash — any claim in the EAT.

That split is the right shape. KMS is a coarse, global allowlist keyed on the boot image. Value X is a fine-grained, project-local policy enforced by the attested code after unwrapping. Trying to put Value X into the KMS condition key would couple your identity to AWS's condition language and require a KMS policy update on every rebuild — which defeats the point.
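The enclave-side half of that split — the policy the attested code applies after KMS releases the wrapped secret — can be sketched as a gate over EAT claims. Field names and the policy shape below are illustrative, not the repo's API:

```python
def release_plaintext(plaintext: bytes, eat, policy: dict) -> bytes:
    """Layer 2 of the split: KMS already gated on PCR0 (layer 1).
    The attested code now enforces a project-local policy over the
    EAT claims before the application ever sees the secret.
    `eat` needs .value_x and .source_hash — hypothetical names."""
    if eat.value_x != policy["expected_value_x"]:
        raise PermissionError("Value X mismatch — wrong source tree")
    if eat.source_hash != policy["expected_source_hash"]:
        raise PermissionError("ratchet hash mismatch")
    return plaintext   # only now does the app get the secret
```

Rebuilding from different source changes value_x, so the gate fails locally — no KMS policy update needed, which is the point of keeping Value X out of the condition key.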

TDX and SNP need a user-supplied policy gate (Vault plugin, OPA sidecar, custom service) because they don't have an equivalent of Nitro's NSM-backed KMS recipient flow. That's follow-up work tracked under bountynet-gate in DESIGN.md.

[ deploy_protocol // two paths ]

There are two ways to use Genesis in your own project. They solve different problems and most people want the first one.

Path A // consume the composite Action

Public, consumable from any GitHub repository. Drop it into your workflow, point it at your source, and it will produce an attestation.cbor as a workflow artifact on every push.

# .github/workflows/attested-build.yml
jobs:
  attested-build:
    runs-on: [self-hosted, tdx, bountynet]      # or your own TEE labels
    steps:
      - uses: actions/checkout@v4
      - uses: maceip/bountynet-genesis/v2/action@main
        with:
          source: v2
          cmd: cargo build --release

Honest caveat. The Action only produces a real attestation if the job executes on a TEE-capable host. GitHub-hosted ubuntu-latest runners have no TDX. You need a self-hosted runner registered against your repo and running inside a TEE — that's path B.

Path B // deploy your own attested runner

Spin up a GCP TDX VM (cheapest TEE option), install the GitHub Actions runner agent, tag it [self-hosted, tdx, bountynet], and you're done. Our own self-hosted runner that produces the ouroboros attestation runs with this exact setup.

$ export GITHUB_TOKEN=ghp_your_runner_registration_token
$ export GITHUB_REPO=you/your-repo
$ ./deploy/gcp-tdx.sh

# provisions a c3-standard-4 with TDX enabled, installs Docker, copies
# the bountynet binary, registers the instance as a self-hosted runner
# against the repo you specify. idle cost: ~$0.17/hour.

Why we don't let you share our runner. A shared self-hosted runner would let anyone who can queue a job execute arbitrary code inside our TEE. That voids the attestation guarantee for everyone using the same runner. Bring your own TEE host — we deliberately scope our runner to a single repository.

Follow-up: shadow attestation service — a narrow, single-purpose endpoint that accepts a source tarball and returns a bountynet-signed shadow EAT, zero TEE required on the client side. Tracked, not yet built. See DESIGN.md.

[ genesis_constitution // operational mandates ]

Six rules, derived verbatim from v2/CONSTITUTION.md. Every code change in this repo is checked against them. If a feature doesn't serve one of these, it doesn't get built.

  1. hardware before software Every trust claim is rooted in a silicon-level signature from the CPU vendor. No intermediaries, no bearer tokens, no cloud-provider promises. If the vendor root CA doesn't sign it, we don't trust it.
  2. the chain is unbroken Source → attested build → artifact → attested runtime. Every link is a hash in a hardware-signed report_data. If any link is missing, the result is unverified, not true.
  3. value X is the identity One sha384 represents "this exact software." Reproduced across platforms, computed inside TEE hardware, verifiable by anyone with the source. No external database required, no signing authority required, no registry required for the cryptographic proof.
  4. the TEE is the witness, not the oracle We do not require reproducible builds. The TEE attests that source S became artifact A inside environment E. Anyone can re-run the build in their own TEE and cross-witness for anytrust. This is the Attestable Containers contribution.
  5. no shortcuts in verification If a signature can't be checked, the result is "unverified." No "probably." No "the cloud provider promised." No "we checked last Tuesday." No insecure mode without making it loudly visible.
  6. no plumbing before the core proof works end-to-end No token formats, no smart contracts, no compatibility layers until the three checks (platform measurement, Value X, pubkey binding) pass on hardware. Everything not strictly load-bearing is a follow-up.
[ system_specifications_v2 // audit drawer ]

Raw logic and schemas. Collapsed by default — click a section to expand. Each section corresponds to a file in the repo (linked at the bottom).

invariant.md // the three checks

Everything in this repo exists to solve one problem. It breaks down into exactly three checks. If any one fails, the attestation is worthless.

flowchart TB
    Q[hardware quote<br/>signed by CPU vendor]
    P[1. platform measurement<br/>matches expected value<br/>MRTD / MEASUREMENT / PCR0]
    X[2. Value X<br/>sha384 of runner source<br/>matches expected value]
    K[3. pubkey in quote<br/>generated inside the TEE<br/>sha256 tls_spki in report_data]
    OK[attestation is real]
    Q --> P
    Q --> X
    Q --> K
    P --> OK
    X --> OK
    K --> OK

Check #1 proves the boot image is genuine (the shim is in the boot image, the CPU measured it). Check #2 proves the application payload is genuine (Value X computed by the shim, trusted because of #1). Check #3 proves the attestation was produced by the attested environment, not a proxy — channel binding.

EAT token schema

IETF RATS Entity Attestation Token (RFC 9711), profile https://bountynet.dev/eat/v2. CBOR-encoded, embedded in an X.509 extension at OID 2.23.133.5.4.9 (TCG DICE Conceptual Message Wrapper — same convention Gramine uses).

EatToken {
    version:              u32       // schema version
    eat_profile:          string    // "https://bountynet.dev/eat/v2"
    value_x:              [u8; 48]  // sha384 of runner source — LATTE L2
    platform:             u8        // 1 Nitro / 2 SevSnp / 3 Tdx
    platform_measurement: Vec<u8>   // PCR0 / MEASUREMENT / MRTD
    platform_quote:       Vec<u8>   // raw TEE evidence, opaque leaf
    tls_spki_hash:        [u8; 32]  // sha256(TLS SPKI) — channel binding
    source_hash:          [u8; 48]  // CT — Attestable Containers
    artifact_hash:        [u8; 48]  // A — Attestable Containers
    iat:                  u64       // issued at, unix seconds
    eat_nonce:            [u8; 32]  // freshness
    previous_attestation: Vec<u8>   // prior stage CBOR bytes
}

binding_bytes() = sha256( version || profile || value_x || platform ||
                          tls_spki_hash || source_hash || artifact_hash ||
                          iat || eat_nonce || previous_hash() )
previous_hash() = sha256(previous_attestation)   // or zeros for root

// binding_bytes() goes into report_data[0..32] of the TEE quote.
// The TEE hardware signs report_data. No separate app-level key.
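As a concrete reading of the binding pseudocode, here is a hedged Python sketch of binding_bytes(). The integer widths and endianness (u32/u8/u64 big-endian) are assumptions for illustration — the authoritative serialization lives in v2/src:

```python
import hashlib
import struct

def previous_hash(previous_attestation: bytes) -> bytes:
    # sha256 of the prior stage's CBOR, or all-zeros for the root stage
    if not previous_attestation:
        return bytes(32)
    return hashlib.sha256(previous_attestation).digest()

def binding_bytes(version: int, profile: str, value_x: bytes, platform: int,
                  tls_spki_hash: bytes, source_hash: bytes, artifact_hash: bytes,
                  iat: int, eat_nonce: bytes, previous_attestation: bytes) -> bytes:
    """Sketch of the schema's binding_bytes(): concatenate the claims
    in the documented order and sha256 the result. The 32-byte output
    is what goes into report_data[0..32] of the TEE quote."""
    h = hashlib.sha256()
    h.update(struct.pack(">I", version))     # u32, assumed big-endian
    h.update(profile.encode())
    h.update(value_x)                        # 48 bytes
    h.update(struct.pack(">B", platform))    # u8
    h.update(tls_spki_hash)                  # 32 bytes
    h.update(source_hash)                    # 48 bytes
    h.update(artifact_hash)                  # 48 bytes
    h.update(struct.pack(">Q", iat))         # u64, assumed big-endian
    h.update(eat_nonce)                      # 32 bytes
    h.update(previous_hash(previous_attestation))
    return h.digest()
```

Because every claim feeds the hash, changing any one of them (including the previous stage's bytes) produces a different report_data, and the existing hardware signature no longer covers it.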
trust anchors // three independent vendors

Anytrust (Attestable Containers contribution #4): if two independent TEE vendors attest the same Value X from the same source, trust at least one → trust the build. Our code verifies against the vendor root CA for each platform. The roots are pinned in v2/src/quote/verify.rs.

flowchart TB
    Src[source tree<br/>Value X = sha384 of files]
    Src --> TDX[Intel TDX]
    Src --> SNP[AMD SEV-SNP]
    Src --> NIT[AWS Nitro]
    TDX --> TDXQ[TD quote + MRTD]
    SNP --> SNPQ[attestation report + MEASUREMENT]
    NIT --> NITQ[COSE Sign1 doc + PCR0]
    TDXQ --> IntelCA[Intel SGX Root CA]
    SNPQ --> AMDARK[AMD Root Key ARK]
    NITQ --> NitroCA[AWS Nitro Root CA]
    IntelCA --> Any[anyone can verify<br/>no bountynet server needed]
    AMDARK --> Any
    NitroCA --> Any
ouroboros.sh // CI builds itself

Every push to main that touches v2/ triggers .github/workflows/attested-self-build.yml, which dispatches to our self-hosted TDX runner. The runner checks out the commit, runs bountynet build v2/ inside the TEE, and uploads a real Intel-TDX-signed attestation as a workflow artifact. The first run (2026-04-14 12:40 UTC, commit 2593db6) is archived byte-identically at v2/testdata/chain/tdx_ouroboros.cbor.

flowchart LR
    Push[developer push]
    GH[github.com]
    Runner[self-hosted TDX runner]
    Build[sudo bountynet build v2/]
    TDX[TDX module<br/>signs quote]
    Archive[testdata/chain/tdx_ouroboros.cbor]
    Push --> GH
    GH --> Runner
    Runner --> Build
    Build --> TDX
    TDX --> Archive
    Archive -.->|next release<br/>chains to this| Build
standards.txt
what we build                                     | derived from
two-layer check (platform + portable identity)    | LATTE (Xu et al., SJTU, EuroS&P 2025)
ratchet + build-inside-TEE + (PCR, CT, A) binding | Attestable Containers (Cambridge/JKU, CCS 2025)
build-to-runtime chain (AC contribution #6)       | left to consumer; we implement it
EAT token format                                  | IETF RATS RFC 9711
X.509 extension delivery                          | TCG DICE Attestation Architecture v1.1 (OID 2.23.133.5.4.9)
bootstrap-once, then cheap signatures             | Flashbots Andromeda / SIRRAH
readme.txt

bountynet-genesis is MIT-licensed. Constitution at v2/CONSTITUTION.md. Architectural memory at v2/DESIGN.md. Hardware runbook at v2/HARDWARE_VALIDATION.md.

// built inside a TEE that built itself