v0.1 release notes — what shipped in walkindb's first public release
The API, three clients, SECURITY.md rollout, and real benchmark numbers from the €6 VPS.
Posted 2026-04-11 · 10 minute read
walkindb v0.1 is live at https://api.walkindb.com. This is the "first thing you can curl from an HN comment" release. It is deliberately not labeled v1.0 because the SECURITY.md rollout-state table still has items marked 🟡 and ⏳, and this post is honest about which ones.
If you just want the demo, the live SQL playground at walkindb.com will do. The rest of this post is what's under it.
What's in the release
The API
One HTTP endpoint. POST /sql. Send it {"sql": "..."}, get back JSON with columns, rows, rows_affected, truncated. The first call returns a session token in the X-Walkin-Session header; include it on subsequent calls to reach the same walk-in. Ten minutes after creation, walkindb deletes the file.
Two supporting endpoints: GET /healthz (liveness) and GET /openapi.json (full OpenAPI 3.1 spec — LLM-discoverable). That's the entire API surface.
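The session round trip above can be sketched with nothing but the standard library, matching the Python SDK's stdlib-only constraint. `build_sql_request` is a hypothetical helper, not part of the SDK; it only constructs the request object (nothing is sent here), and the header and endpoint names are the ones the post documents.

```python
import json
import urllib.request

def build_sql_request(sql, session=None, base="https://api.walkindb.com"):
    # Build (without sending) the POST /sql request described above.
    # Passing a token captured from a previous response's X-Walkin-Session
    # header routes the next call back to the same walk-in.
    body = json.dumps({"sql": sql}).encode("utf-8")
    req = urllib.request.Request(
        base + "/sql",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    if session is not None:
        req.add_header("X-Walkin-Session", session)
    return req

req = build_sql_request("SELECT 1", session="wkn_example")
```

Sending it with `urllib.request.urlopen(req)` would return the JSON body with `columns`, `rows`, `rows_affected`, and `truncated`.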
Three clients
- `pip install walkindb` — Python SDK, stdlib-only (uses `urllib`, not `requests`/`httpx`), so it installs inside any agent sandbox. `Client`, `Result`, `WalkinDBError`. Apache-2.0.
- `npm install walkindb` — JavaScript / TypeScript SDK, ESM only, TypeScript types shipped in the package. Works on Node 18+, Bun, Deno, Cloudflare Workers, and any browser with `fetch`. Zero dependencies.
- `npx walkindb-mcp` — MCP server that adds `walkindb_execute` and `walkindb_reset` tools to Claude Desktop, Claude Code, Cursor, Zed, and Continue. One line in your client config.
All three are version 0.1.0, Apache-2.0, and were published the same day as the first API deployment.
Security layers that actually shipped
The full SECURITY.md model has eight layers. v0.1 ships six of them, partially or fully:
| Layer | Status in v0.1 | Notes |
|---|---|---|
| 1. SQLite compile-time hardening | ⏳ Roadmap | Blocked on switching off modernc.org/sqlite; see deferred list below |
| 2. sqlite3_set_authorizer callback | ⏳ Roadmap (but unblocked) | Prototype landing in v0.2 via a trampoline over modernc's lib package — no cgo, no fork. See launch/research-modernc-authorizer.md |
| 3. Per-connection sqlite3_limit | ✅ Live | All ten limits applied per request via modernc.org/sqlite.Limit() on a dedicated sql.Conn. LIMIT_ATTACHED=0 is the quiet star here |
| 4. Hard 2 s query timeout | ✅ Live | context.WithTimeout per request; 408 on interrupt |
| 5. Per-instance 10 MB cap | ✅ Live | PRAGMA max_page_count = 2560; 507 on full |
| 6a. Landlock filesystem jail | ✅ Live | internal/sandbox/sandbox_linux.go confines the process to /var/walkindb/** + /tmp at kernel level |
| 6b. seccomp-bpf allowlist | ✅ Live | systemd SystemCallFilter=@system-service with explicit denies |
| 6c. Separate executor process + cgroups | ⏳ Roadmap | Single-process today |
| 7. Rate and quota limits | 🟡 Partial | 60 req/min + 10 new-instance creates/min per IP enforced; per-IP concurrent-instance cap and per-instance query cap still TODO |
| 8. Session token discipline | ✅ Live | HMAC-SHA256 over UUIDv7 + 32-byte nonce, wkn_ prefix, daily secret rotation with current+previous acceptance for one cycle, 404 on any failure (no enumeration oracle), tokens never logged |
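The shape of Layer 8 can be sketched in a few lines of Python. This is an illustration of the ingredients the table names (HMAC-SHA256, a UUID plus a 32-byte nonce, the `wkn_` prefix, acceptance of the current and previous secret), not walkindb's actual implementation: `uuid4` stands in for UUIDv7, and the payload encoding and helper names are assumptions.

```python
import hashlib
import hmac
import os
import uuid

SECRET = os.urandom(32)  # stand-in for the daily-rotated server secret

def mint_token(secret: bytes = SECRET) -> str:
    # HMAC-SHA256 over a UUID + 32-byte nonce, wkn_-prefixed.
    payload = uuid.uuid4().bytes + os.urandom(32)
    mac = hmac.new(secret, payload, hashlib.sha256).digest()
    return "wkn_" + (payload + mac).hex()

def verify_token(token: str, secrets: list) -> bool:
    # Accept current + previous secret for one rotation cycle. Any
    # failure should surface to the caller as a plain 404, so a probe
    # can't distinguish "bad token" from "expired walk-in".
    if not token.startswith("wkn_"):
        return False
    try:
        raw = bytes.fromhex(token[4:])
    except ValueError:
        return False
    payload, mac = raw[:-32], raw[-32:]
    return any(
        hmac.compare_digest(hmac.new(s, payload, hashlib.sha256).digest(), mac)
        for s in secrets
    )
```

The constant-time `hmac.compare_digest` matters: a naive `==` on the MAC would leak timing information to an attacker guessing tokens byte by byte.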
Plus an extra: an application-level SQL keyword blocklist (internal/executor/blocklist.go) that rejects ATTACH, DETACH, load_extension, readfile, writefile, fts5_*, sqlite_dbpage, zipfile, unzip, edit before the statement ever reaches SQLite. Comment-aware (strips /* */ and -- before matching), case-insensitive. This covers what Layer 2 would cover, minus the dangerous-PRAGMA denial. Once v0.2's authorizer lands, the blocklist stays as defense-in-depth.
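The comment-aware, case-insensitive matching can be sketched like this. It is a simplified illustration of the technique, not the contents of `internal/executor/blocklist.go`: strip comments first (so a keyword hidden inside a comment, which SQLite would never execute, doesn't trigger a false positive), then match whole tokens rather than substrings (so a table named `attachments` survives).

```python
import re

# Keyword list from the post; fts5_* is matched as a prefix family.
BLOCKED = {"attach", "detach", "load_extension", "readfile",
           "writefile", "sqlite_dbpage", "zipfile", "unzip", "edit"}

def is_blocked(sql: str) -> bool:
    # Strip /* ... */ block comments and -- line comments before matching.
    no_comments = re.sub(r"/\*.*?\*/", " ", sql, flags=re.DOTALL)
    no_comments = re.sub(r"--[^\n]*", " ", no_comments)
    # Case-insensitive, whole-token match.
    tokens = re.findall(r"[A-Za-z_][A-Za-z0-9_]*", no_comments.lower())
    return any(t in BLOCKED or t.startswith("fts5_") for t in tokens)
```

Token matching is the design choice worth copying: a substring check would reject harmless queries like `SELECT * FROM attachments`.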
What's deferred on purpose
- Layer 1 — SQLite compile-time hardening (`SQLITE_OMIT_LOAD_EXTENSION`, `SQLITE_OMIT_ATTACH`, `DQS=0`). Requires either a fork of modernc or switching to mattn/go-sqlite3 with cgo. Decision memo at `launch/research-modernc-authorizer.md` in the repo.
- Layer 2 — `sqlite3_set_authorizer` callback. Not blocked on the driver — the research memo found the C function is already compiled into modernc's `lib` package and reachable via a trampoline. Landing in v0.2.
- Layer 6c — separate executor process + per-instance cgroups. Nice-to-have; would tighten memory/pid caps per query. Deferred because the single-process design is simpler and the current systemd hardening is already enough for the Show HN bar.
- Layer 7 residuals — per-IP concurrent-instance cap, per-instance query cap, global circuit breaker. Easier once we have real production telemetry to calibrate against.
None of these are shipping tomorrow, and the landing page's security paragraph says so in the same language. That honesty matters more than the missing checks.
Measurements from the VPS
walkindb runs on a single OVHcloud VPS (€6/month, 2 vCPU, 8 GB RAM) in Gravelines, France. These numbers are measured on that box with the production binary, running a second walkindb instance at 127.0.0.1:9090 with rate limits cranked high, benchmarked by a Go harness that lives in bench/ in the repo. Source is open, reproduce at will.
Memory footprint
| State | Resident (RSS) |
|---|---|
| Production binary at rest, no traffic | 11 MB |
| Benchmark binary fresh-started, no walk-ins | ~15 MB |
| Benchmark binary with 5 000 walk-ins resident | ~19 MB |
| Benchmark binary with 12 000 walk-ins resident | ~19 MB (flat) |
Marginal memory per walk-in: effectively zero. The share-nothing architecture means idle walk-ins hold nothing in process memory — the directory, meta.json, and db.sqlite live on disk and are read on demand.
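The on-disk layout explains the flat memory curve, and can be sketched in a few lines. This is an illustration of the share-nothing idea under stated assumptions: the file names (`meta.json`, `db.sqlite`) come from the post, but `provision_walkin` and the metadata fields are made up here.

```python
import json
import sqlite3
import tempfile
import time
from pathlib import Path

def provision_walkin(root: Path, instance_id: str) -> Path:
    # Share-nothing sketch: a walk-in is just a directory on disk.
    # Nothing stays resident in process memory between requests; the
    # files are opened on demand and closed when the request ends.
    d = root / instance_id
    d.mkdir()
    (d / "meta.json").write_text(json.dumps({"created_at": time.time()}))
    conn = sqlite3.connect(str(d / "db.sqlite"))
    conn.execute("PRAGMA user_version = 1")  # force the file onto disk
    conn.close()
    return d

root = Path(tempfile.mkdtemp())
walkin = provision_walkin(root, "demo")
```

Because the process holds no per-instance state, the marginal RAM cost of instance number 12 000 is the same as instance number 5: zero.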
Latency (measured locally, loopback)
| Operation | p50 | p90 | p99 | p99.9 |
|---|---|---|---|---|
| GET /healthz | 390 µs | 776 µs | 7.3 ms | 17.1 ms |
| POST /sql SELECT 1 (reused session) | 3.0 ms | 10.1 ms | 21.3 ms | 29.7 ms |
| POST /sql (fresh walk-in each call) | 14.3 ms | 29.0 ms | 47.6 ms | 68.4 ms |
These are the server's own latency, measured over loopback to exclude the internet. End-to-end from a laptop in Portugal over HTTPS to api.walkindb.com in Gravelines adds ~150–240 ms of TLS + routing.
Throughput (concurrent requests, reused session)
| Concurrency | Total | RPS | p50 | p99 |
|---|---|---|---|---|
| 1 | 3 814 | 191 | 3.2 ms | 22.9 ms |
| 8 | 12 529 | 626 | 9.7 ms | 49.2 ms |
| 32 | 18 690 | 933 | 28.7 ms | 115.4 ms |
| 128 | 18 791 | 934 | 125.3 ms | 384.6 ms |
Saturates at ~930 rps around c=32. c=128 gets the same rate but much worse tail latency — the two vCPUs are at the edge.
Walk-in creation rate and capacity
Single-threaded walk-in creation: 59 per second. Concurrent with 8 creators: 138 per second sustained. The test provisioned 100 000 walk-ins back-to-back in 727 seconds without a single failure. Disk usage at that scale: 1.4 GB (~14 KB per walk-in). Memory at that scale: essentially unchanged — idle walk-ins cost no RAM.
Box-level state after 100 000 resident walk-ins:
- 7.1 GB of 8 GB RAM still free
- 68 GB of 72 GB disk still free
- Zero failures, zero retries, zero rate-limit hits
Projected disk ceiling given the 14 KB-per-walk-in figure: ~4.8 million walk-ins before the disk fills. Long before that, the answer is to shard to a second box — not to rewrite the architecture. The scaling post has the full methodology.
What changed in the binary between yesterday and today
A short changelog for the people who read changelogs:
- `internal/executor`: added `sqlite3_limit` suite (all 10 caps), SQL keyword blocklist, and multi-statement SQL dispatch so `CREATE TABLE t; INSERT t; SELECT * FROM t` returns rows instead of `rows_affected`. Shipped after a smoke test against the live playground caught the bug.
- `internal/session`: daily HMAC secret rotation with current+previous acceptance.
- `internal/sandbox`: new package applying Landlock on Linux, no-op on other OSes. Called from `main.go` after secret load and before the listener opens.
- `internal/ratelimit`: per-IP token buckets (separate budgets for requests vs new-instance creates), in-memory LRU capped at 10 000 IPs.
- `internal/access`: structured JSONL access log with exactly the 7 fields the privacy notice commits to — timestamp, source_ip, instance_id, http_method, http_status, sql_byte_length, user_agent. Never the SQL text. Daily rotation + 7-day sweep. Schema-lock unit test fires if anyone tries to add a field.
- `cmd/walkindb`: env-var knobs for rate-limit tuning (`WALKINDB_RATE_LIMIT_REQS`, `WALKINDB_RATE_LIMIT_NEW_INSTANCES`), secret rotation goroutine, sweeper goroutine.
- systemd unit: tightened with `SystemCallFilter=@system-service` plus denies for `@mount @swap @reboot @raw-io @cpu-emulation @debug @obsolete @privileged @resources`, plus all the standard `Protect*` and `Restrict*` directives.
- `bench/`: new Go benchmark harness that produced all the numbers in this post. Runs against any walkindb instance you point it at.
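The per-IP rate limiting is a classic token bucket: a steady refill rate with a burst-sized cap, one bucket per IP and per budget. A minimal sketch of the technique (walkindb's actual implementation is Go, in `internal/ratelimit`; names and numbers below are illustrative):

```python
import time

class TokenBucket:
    # Classic token bucket: tokens refill at a steady rate up to a
    # burst cap; each request spends one token or is rejected.
    def __init__(self, rate_per_min: float, burst: int):
        self.rate = rate_per_min / 60.0   # tokens per second
        self.capacity = float(burst)
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, clamped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate_per_min=60, burst=5)
results = [bucket.allow() for _ in range(6)]  # burst of 6 back-to-back calls
```

A server would keep a dict of these keyed by client IP (evicted LRU-style, as the changelog notes), with separate buckets for the request budget and the instance-creation budget.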
Test suite now at ~70 tests across 9 internal packages, all green. The load-bearing one is TestEntryFieldShapeIsExhaustive in internal/access — it fails if anyone adds a logged field without updating legal/privacy.md first.
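The schema-lock idea is worth stealing for any log that makes privacy commitments. A Python analogue of that Go test, with `log_entry` as a hypothetical helper (the field names are the seven the post commits to):

```python
import datetime
import json

# The exact field set the privacy notice commits to. Changing this set
# without updating the privacy notice is the failure the test exists to catch.
COMMITTED_FIELDS = frozenset({
    "timestamp", "source_ip", "instance_id", "http_method",
    "http_status", "sql_byte_length", "user_agent",
})

def log_entry(source_ip, instance_id, method, status, sql_bytes, user_agent):
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source_ip": source_ip,
        "instance_id": instance_id,
        "http_method": method,
        "http_status": status,
        "sql_byte_length": sql_bytes,  # length only, never the SQL text
        "user_agent": user_agent,
    }
    # Schema lock: refuse to emit an entry whose shape drifts.
    assert set(entry) == COMMITTED_FIELDS, "update legal/privacy.md first"
    return json.dumps(entry)

line = log_entry("203.0.113.7", "demo", "POST", 200, 42, "curl/8.5")
```

The point is that the invariant lives next to the code that could break it, so "add one more logged field" fails loudly instead of silently widening the log.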
What's next
In rough priority order:
- Email aliases at Cloudflare Email Routing — `abuse@`, `dmca@`, `security@`, `privacy@`, `legal@`, `hello@`. The privacy notice commits to acting on `abuse@` within 72 hours; that promise is only truthful once the mailbox exists.
- DMCA designated agent registration at copyright.gov ($6, 10 minutes). US Section 512(c) safe harbor is not fully claimable until this is done.
- Layer 2 (authorizer callback) via the trampoline path documented in `launch/research-modernc-authorizer.md`. A prototype may already exist in `internal/authorizer/` by the time you read this.
- Layer 1 (SQLite compile-time hardening). Likely route: stay on `modernc.org/sqlite` for v0.x, evaluate a mattn-based build for v1.0.
- Layer 6c — separate executor process per query, with cgroups for memory and pid limits. Bigger refactor; probably v0.3.
- Persistent tier — still intentionally absent. v0.1 has no paid tier of any kind. If there's demand after Show HN, expect an out-of-band email-based offering.
- Regions — one VPS in Gravelines is enough for v0. Multi-region is a day-after problem.
- Observability — Prometheus metrics on `:9091`, or a `/metrics` endpoint. The 7-day access logs are fine for abuse response; operational metrics need their own feed.
Trying it
```shell
# First call — provisions a walk-in, returns the session token in the header
curl -i -X POST https://api.walkindb.com/sql \
  -H "content-type: application/json" \
  -d '{"sql":"CREATE TABLE notes(id INTEGER PRIMARY KEY, body TEXT); INSERT INTO notes(body) VALUES(\"hello\"); SELECT * FROM notes"}'
```
That's the entire onboarding. No signup, no API key, no credit card.
Python: pip install walkindb. JavaScript: npm install walkindb. Claude Desktop / Cursor / Zed: add walkindb-mcp to your MCP config per walkindb.com/docs/mcp.
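For the MCP route, the one-liner usually lands in your client's config file as an entry under `mcpServers`. A sketch assuming the standard Claude Desktop / Cursor config shape (the key name `walkindb` is arbitrary; check walkindb.com/docs/mcp for the authoritative snippet):

```json
{
  "mcpServers": {
    "walkindb": {
      "command": "npx",
      "args": ["walkindb-mcp"]
    }
  }
}
```

After a client restart, the `walkindb_execute` and `walkindb_reset` tools should appear in the tool list.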
Full docs: walkindb.com/docs. Bug bounty: [email protected] — we pay proportional to impact.
v0.1 is the first release. It is not the last. Thanks for reading.
Also see
- How walkindb holds 100 000 concurrent walk-ins on a single €6 VPS — full benchmark methodology and the share-nothing architecture post
- Security model — the defense-in-depth layers referenced above
- walkindb on GitHub — source for the server, clients, MCP wrapper, docs, legal, and benchmarks
- walkindb.com — home page + live SQL playground