Security audits
Independent review is the standard.
AI is the standing review layer.
Asylia has not completed an external human audit yet. Until that bar is cleared, every Bitcoin-critical change is reviewed with a recurring AI-assisted process across GPT, Claude, and Gemini, then kept open for human inspection.
v0.1 · May 1, 2026
01 · Status
No human audit has been completed yet.
That is a security fact, not a footnote.
Asylia is early, and the honest state is simple: no external human security firm has completed a report on the product. We will not use the word "audited" as a shortcut until that work exists, has been reviewed, and can be linked publicly.
The current control is a standing AI-assisted review program focused on the narrow Bitcoin surface: descriptors, script construction, PSBT assembly, hardware signing adapters, signature checks, and broadcast behavior. AI review does not certify the system. It raises questions, finds edge cases, and forces repeatable evidence before code ships.
- AI review (recurring): run against material security changes and release candidates.
- Model quorum (3 families): GPT, Claude, and Gemini are used as separate review lenses.
02 · Method
Three independent model passes.
The goal is adversarial coverage, not a single model verdict.
Each review round gives the same bounded scope to the current top model from GPT, Claude, and Gemini. Findings are compared for overlap and disagreement. Repeated findings become engineering work; single-model findings are investigated, reproduced, or dismissed with notes. The process is strongest when the models disagree, because disagreement exposes assumptions that a normal checklist can miss.
1. Scope freeze: define the exact package, route, adapter, or transaction path under review.
2. Model review: run GPT, Claude, and Gemini independently with the same threat model and code context.
3. Triage: group findings by severity, reproducibility, affected funds surface, and user impact.
4. Remediation: patch confirmed issues, add tests where the failure mode is stable, and record residual risk.
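The triage rule above can be sketched in a few lines. This is an illustration, not Asylia's tooling: the finding IDs and the "two or more models" threshold are assumptions made for the example; real rounds would key findings by a normalized description rather than an ID.

```python
from collections import Counter

# Hypothetical finding IDs reported by each model family in one review round.
findings = {
    "gpt":    {"F-01", "F-02", "F-05"},
    "claude": {"F-02", "F-03", "F-05"},
    "gemini": {"F-02", "F-04"},
}

def triage(findings: dict[str, set[str]]) -> tuple[set[str], set[str]]:
    """Split findings into engineering work (reported by 2+ models)
    and single-model leads that need manual reproduction or dismissal."""
    counts = Counter(f for reported in findings.values() for f in reported)
    confirmed = {f for f, n in counts.items() if n >= 2}
    leads = {f for f, n in counts.items() if n == 1}
    return confirmed, leads

confirmed, leads = triage(findings)
print(sorted(confirmed))  # ['F-02', 'F-05']
print(sorted(leads))      # ['F-01', 'F-03', 'F-04']
```

Overlap is a prioritization signal, not proof: a finding only one model raises can still be real, which is why single-model leads are investigated rather than discarded.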
03 · Coverage
The current AI review bench.
Model families are listed by role, not treated as certification authorities.
| Model family | Review role | Current focus | Report status | Cadence |
|---|---|---|---|---|
| GPT | Systems reviewer | Cross-file invariants, data-flow assumptions, and missing negative tests. | Internal notes only | Every audit round |
| Claude | Adversarial reviewer | Threat-model gaps, ambiguous wallet policy handling, and failure-mode reasoning. | Internal notes only | Every audit round |
| Gemini | Breadth reviewer | Large-context comparison across adapters, docs, changelog, and product claims. | Internal notes only | Every audit round |
04 · Scope
The first audit boundary is Bitcoin-critical.
Product chrome can wait. Funds-facing primitives cannot.
The intended first external audit scope is the code that can affect wallet correctness, signing intent, recovery, or broadcast safety.
- btc-core: descriptor parsing, address derivation, PSBT assembly, finalization, and signature checks.
- blockchain-data-btc: chain reads, UTXO interpretation, fee inputs, and broadcast boundaries.
- hw-trezor and hw-ledger: public-root export, policy registration, device signing, and returned signature handling.
- shared-types: domain contracts that keep wallet policy, signer metadata, and transaction state consistent across apps.
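The descriptor surface in btc-core is the kind of narrow, testable primitive this scope targets. As one illustration of what "repeatable evidence" looks like (this is a sketch of BIP 380's descriptor checksum, not Asylia's code), the 8-character checksum that follows `#` in a descriptor string can be computed and verified in a few dozen lines:

```python
# BIP 380 descriptor checksum, as used in Bitcoin Core descriptors like
# "wpkh(...)#checksum". Sketch for illustration; constants follow the BIP.
INPUT_CHARSET = (
    "0123456789()[],'/*abcdefgh@:$%{}"
    "IJKLMNOPQRSTUVWXYZ&+-.;<=>?!^_|~"
    "ijklmnopqrstuvwxyzABCDEFGH`#\"\\ "
)
CHECKSUM_CHARSET = "qpzry9x8gf2tvdw0s3jn54khce6mua7l"
GENERATOR = [0xF5DEE51989, 0xA9FDCA3312, 0x1BAB10E32D, 0x3706B1677A, 0x644D626FFD]

def _polymod(symbols):
    # BCH-style checksum over 5-bit symbols.
    chk = 1
    for value in symbols:
        top = chk >> 35
        chk = (chk & 0x7FFFFFFFF) << 5 ^ value
        for i in range(5):
            chk ^= GENERATOR[i] if ((top >> i) & 1) else 0
    return chk

def _expand(descriptor):
    # Map descriptor characters to 5-bit symbols; the high bits of each
    # character's charset index are packed in groups of three.
    groups, symbols = [], []
    for c in descriptor:
        if c not in INPUT_CHARSET:
            return None
        v = INPUT_CHARSET.index(c)
        symbols.append(v & 31)
        groups.append(v >> 5)
        if len(groups) == 3:
            symbols.append(groups[0] * 9 + groups[1] * 3 + groups[2])
            groups = []
    if len(groups) == 1:
        symbols.append(groups[0])
    elif len(groups) == 2:
        symbols.append(groups[0] * 3 + groups[1])
    return symbols

def checksum(descriptor):
    """Compute the 8-character checksum for a descriptor body (no '#...')."""
    chk = _polymod(_expand(descriptor) + [0] * 8) ^ 1
    return "".join(CHECKSUM_CHARSET[(chk >> (5 * (7 - i))) & 31] for i in range(8))

def verify(descriptor, check):
    """True if `check` is the valid checksum for `descriptor`."""
    symbols = _expand(descriptor)
    if symbols is None or len(check) != 8 or any(c not in CHECKSUM_CHARSET for c in check):
        return False
    return _polymod(symbols + [CHECKSUM_CHARSET.index(c) for c in check]) == 1

body = "raw(deadbeef)"
good = checksum(body)
bad = ("q" if good[0] != "q" else "p") + good[1:]
assert verify(body, good) and not verify(body, bad)
```

The point of a primitive this small is that a reviewer, human or model, can be asked for a concrete negative test (a mutated checksum must fail) rather than a verdict.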
05 · Limits
AI review is not a substitute for human accountability.
It is useful precisely because its limits are explicit.
AI-assisted review can widen coverage quickly, but it cannot replace a qualified human audit, live exploit validation, vendor firmware review, or formal assurance. Findings are treated as leads until they are reproduced against the code, covered by tests, or closed with a concrete rationale.
- No AI model can certify that funds are safe.
- No model output is accepted without engineering review.
- No unresolved critical finding is hidden behind marketing language.
06 · Publication
Reports will be linked when they are real.
Security claims should be inspectable by readers, not implied by tone.
When an external human audit is commissioned and completed, this page will link the report, the scope, the commit range, confirmed findings, remediation work, and any accepted residual risk. Until then, Asylia will describe the current process plainly: recurring AI-assisted review, open Bitcoin primitives, and responsible disclosure.
Want to review the surface yourself?
Start with the narrow policy and the MIT-licensed packages. The smaller the wallet surface stays, the easier it is for outside reviewers to challenge it.