Web: Streamlit Exports — vLLM, llama.cpp, MLX
The Planner page's bottom row turns any computed plan into a deploy-ready runtime config. One click per backend — vLLM, llama.cpp, or MLX — emits the matching CLI / JSON artefact.
What you'll see
Narrative beats:
- Planner page ready with a plan already computed; we scroll down to the Export Configuration row.
- Export row in view: three buttons side-by-side.
- Export as vLLM clicked — JSON payload with --model, --max-model-len, --max-num-seqs rendered in a code block.
- Export as llama.cpp clicked — CLI arg string (-m, -c, -ngl) for the same plan.
- Export as MLX clicked — Apple Silicon deploy config serialised as JSON.
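As a rough illustration, one plan could map onto the three artefacts as sketched below. The plan fields, model name, and flag values are hypothetical placeholders, not the actual hwledger-ffi output:

```python
# Illustrative sketch only: field names and values are assumptions,
# not the real hwledger plan schema or export format.
import json

plan = {  # hypothetical computed plan
    "model": "example-org/example-8b-instruct",
    "max_context": 8192,
    "max_seqs": 16,
    "gpu_layers": 32,
}

# vLLM: JSON payload mirroring the server flags shown in the UI
vllm_export = json.dumps({
    "model": plan["model"],
    "max-model-len": plan["max_context"],
    "max-num-seqs": plan["max_seqs"],
}, indent=2)

# llama.cpp: CLI argument string (-m, -c, -ngl) for the same plan
llamacpp_export = (
    f"-m {plan['model']}.gguf -c {plan['max_context']} -ngl {plan['gpu_layers']}"
)

# MLX: Apple Silicon deploy config serialised as JSON
mlx_export = json.dumps({
    "model": plan["model"],
    "max_tokens": plan["max_context"],
}, indent=2)
```

All three derive from the same `plan` dict, which is the property the journey demonstrates.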
All three emit from the same hwledger-ffi export functions that power the CLI's hwledger export subcommand, so the configs are identical regardless of client.
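The single-source-of-truth pattern can be sketched as one canonical serialiser that both the CLI subcommand and the Streamlit button call. The function and caller names below are hypothetical, not the real hwledger-ffi bindings:

```python
# Sketch of the shared-export pattern: every client calls the same
# serialiser, so the emitted config is byte-identical. Names are
# assumptions, not the actual hwledger-ffi API.
import json

def export_vllm(plan: dict) -> str:
    """One canonical serialiser shared by every client."""
    return json.dumps({
        "model": plan["model"],
        "max-model-len": plan["max_context"],
    }, indent=2, sort_keys=True)

def cli_export(plan: dict) -> str:
    # what a CLI export subcommand would print
    return export_vllm(plan)

def web_export(plan: dict) -> str:
    # what the web page would render in a code block
    return export_vllm(plan)

plan = {"model": "example-8b", "max_context": 4096}
assert cli_export(plan) == web_export(plan)  # identical regardless of client
```

Because neither caller re-implements the serialisation, the two paths cannot drift apart.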
Journey not yet recorded.
Run the journey recorder to capture interactions:
./apps/macos/HwLedgerUITests/scripts/run-journeys.sh

Reproduce
```bash
cd apps/streamlit/journeys
bun install
bash scripts/record-all.sh
STREAMLIT_URL=http://127.0.0.1:8599 bunx playwright test specs/exports.spec.ts
```