# Layers Architecture
The prototype implements a clean separation of concerns across six layers, visually represented in the UI via a color-coded layer bar.
```mermaid
graph TD
    A[🧠 ML Browser Benchmark] --> B[Shell / Router]
    B --> C[Tutorial Engine]
    B --> D[Playground Engine]
    C --> E[Block Runtime Layer]
    D --> E
    E --> F[ML Execution Layer]
    F --> G[Benchmarking Layer]
    G --> H[📊 Results Dashboard]
    style A fill:#6366f1,color:#fff
    style B fill:#3b82f6,color:#fff
    style C fill:#22c55e,color:#fff
    style D fill:#eab308,color:#000
    style E fill:#8b5cf6,color:#fff
    style F fill:#ef4444,color:#fff
    style G fill:#ec4899,color:#fff
    style H fill:#6366f1,color:#fff
```
## Layer breakdown
| Layer | Color | Key Files | Responsibility |
|---|---|---|---|
| Shell / Router | Blue | `shell.js`, `router.js`, `app.js` | Hash-based SPA routing, role (learner/educator) toggling, light/dark theme, mobile nav drawer |
| Tutorial Engine | Green | `tutorial.js` | 5-step guided walkthrough: what is classification → dataset → model → runtime → run & see results |
| Playground Engine | Yellow | `playground.js` | Free-form pipeline configuration, single-run + "benchmark all combinations" mode |
| Block Runtime | Purple | `block-runtime.js` | `BlockRegistry` (dataset/model/runtime/backend definitions), `DataFlowGraph` (state graph with validation), `BlockCanvas` (pipeline UI with click-to-configure) |
| ML Execution | Red | `ml-adapter.js` | `AdapterFactory` with per-backend caching, 4 adapter classes (TFJS, ONNX, Transformers, MediaPipe), ImageNet preprocessing, shared `runBenchmark()` orchestrator |
| Benchmarking | Pink | `bench.js` | `MetricsCollector` (`performance.now()` timing, Chromium `performance.memory` delta), `RunRecorder` (localStorage persistence with UUID), `DiffComparator` (comparison table with per-column best-value highlighting) |
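The benchmarking row mentions `performance.now()` timing plus a Chromium-only `performance.memory` delta. A minimal sketch of that pattern (the class shape and field names here are assumptions, not the actual `bench.js` API):

```javascript
// Sketch of a MetricsCollector-style timer. performance.now() is standard;
// performance.memory is a non-standard Chromium extension, so it is read
// with optional chaining and reported as null elsewhere.
class MetricsCollector {
  start() {
    this.t0 = performance.now();
    this.mem0 = performance.memory?.usedJSHeapSize ?? null;
  }
  stop() {
    const elapsedMs = performance.now() - this.t0;
    const mem1 = performance.memory?.usedJSHeapSize ?? null;
    return {
      elapsedMs,
      // Heap delta only when both readings were available (Chromium)
      heapDeltaBytes: this.mem0 !== null && mem1 !== null ? mem1 - this.mem0 : null,
    };
  }
}

const m = new MetricsCollector();
m.start();
for (let i = 0, s = 0; i < 1e6; i++) s += i; // busy work to time
console.log(m.stop().elapsedMs >= 0); // true
```

Guarding the memory read keeps the collector usable in Firefox and Safari, where only the timing half of the metrics is populated.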
## Data flow
```mermaid
sequenceDiagram
    participant U as User
    participant R as Router
    participant P as Page Fragment
    participant E as Engine
    participant B as Block Runtime
    participant M as ML Adapter
    participant C as Benchmarking
    U->>R: Click nav link (#playground)
    R->>P: fetch(pages/playground.html)
    P-->>R: HTML fragment
    R->>E: dynamic import(playground.js)
    E->>B: new DataFlowGraph() + BlockCanvas()
    U->>B: Click blocks (dataset → model → runtime → backend)
    B->>E: block-ready event
    U->>E: Click "Run"
    E->>M: runBenchmark(config, nRuns)
    M->>M: adapter.load() → download model
    M->>M: adapter.infer() → cold + warm runs
    M->>C: MetricsCollector + RunRecorder.save()
    C-->>U: Result card with prediction + metrics
```
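The `adapter.load()` / `adapter.infer()` steps above hinge on the cold-vs-warm split: the first inference pays one-time warm-up costs, so it is reported separately from the averaged warm runs. A sketch of that orchestration (the adapter interface and result fields are assumptions about `runBenchmark()`, not its actual signature):

```javascript
// Sketch of a runBenchmark()-style orchestrator: load once, time the first
// (cold) inference separately, then average nRuns warm inferences.
async function runBenchmark(adapter, input, nRuns = 5) {
  await adapter.load(); // download + initialize the model once

  const timedInfer = async () => {
    const t0 = performance.now();
    const prediction = await adapter.infer(input);
    return { ms: performance.now() - t0, prediction };
  };

  const cold = await timedInfer(); // includes JIT/warm-up cost
  const warm = [];
  for (let i = 0; i < nRuns; i++) warm.push(await timedInfer());

  const warmMs = warm.map(r => r.ms);
  return {
    prediction: cold.prediction,
    coldMs: cold.ms,
    warmAvgMs: warmMs.reduce((a, b) => a + b, 0) / warmMs.length,
  };
}

// Usage with a stub adapter standing in for a real backend:
const stub = { load: async () => {}, infer: async (x) => x * 2 };
runBenchmark(stub, 21, 3).then(r => console.log(r.prediction)); // 42
```

Keeping the orchestrator backend-agnostic is what lets the four adapter classes (TFJS, ONNX, Transformers, MediaPipe) share one benchmarking path.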
## Module dependency graph
```
app.js
├── shell.js → utils.js
└── router.js → utils.js, shell.js
```

Lazy imports:

```
tutorial.js → utils.js, ml-adapter.js
playground.js → utils.js, block-runtime.js, ml-adapter.js, shell.js
ml-adapter.js → bench.js
bench.js → utils.js
```
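The split between eager and lazy imports maps directly onto the router: only `shell.js` and `router.js` load at startup, and each page's engine module is fetched when its route is hit. A sketch of that route table (the table entries and `resolveRoute` helper are assumptions for illustration, not the actual `router.js` code):

```javascript
// Sketch of a hash → {fragment, engine module} route table. Only the
// resolution logic runs here; the browser wiring is shown in comments.
const routes = {
  "#playground": { page: "pages/playground.html", module: "./playground.js" },
  "#tutorial": { page: "pages/tutorial.html", module: "./tutorial.js" },
};

function resolveRoute(hash) {
  // Unknown hashes fall back to the tutorial, keeping deep links safe.
  return routes[hash] ?? routes["#tutorial"];
}

// In the browser the router would then do roughly:
//   const route = resolveRoute(location.hash);
//   const html = await fetch(route.page).then(r => r.text());
//   const engine = await import(route.module); // engine loaded on demand

console.log(resolveRoute("#playground").module); // "./playground.js"
```

Because `import()` is a native expression, this on-demand loading needs no bundler support, which is what makes the zero-build-step decision below viable.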
## Key design decisions
- Zero build step — all ES modules loaded natively by the browser; no bundler, no TypeScript, no framework
- Lazy loading — pages AND CDN libraries (TF.js, ONNX, etc.) loaded on-demand via dynamic `import()` / script injection
- Event-driven — `block-change`, `block-ready`, `role-change` CustomEvents decouple UI from state
- Dual roles — Learner (simplified tutorial) vs Educator (N-runs slider, raw logs, backend selector)
- localStorage as DB — run records persisted as JSON array with UUIDs, CSV export available
- CSS custom properties — design system with per-layer colors, light/dark theme via `data-theme` attribute
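The localStorage-as-DB decision can be sketched as follows (the class shape, storage key, and CSV layout are assumptions, not the actual `RunRecorder` API; storage is injected so the sketch also runs outside the browser, where you would pass `window.localStorage`):

```javascript
// Sketch of a RunRecorder-style store: JSON array under one key, a UUID per
// record, and a flat CSV export derived from the record fields.
const uuid = () =>
  globalThis.crypto?.randomUUID?.() ?? Math.random().toString(16).slice(2);

class RunRecorder {
  constructor(storage, key = "ml-bench-runs") {
    this.storage = storage; // any getItem/setItem pair, e.g. window.localStorage
    this.key = key;
  }
  load() {
    return JSON.parse(this.storage.getItem(this.key) ?? "[]");
  }
  save(record) {
    const runs = this.load();
    runs.push({ id: uuid(), ...record });
    this.storage.setItem(this.key, JSON.stringify(runs));
    return runs.at(-1);
  }
  toCSV() {
    const runs = this.load();
    if (runs.length === 0) return "";
    const cols = Object.keys(runs[0]);
    const rows = runs.map(r => cols.map(c => JSON.stringify(r[c] ?? "")).join(","));
    return [cols.join(","), ...rows].join("\n");
  }
}

// Usage with an in-memory stand-in for localStorage:
const mem = new Map();
const rec = new RunRecorder({
  getItem: k => mem.get(k) ?? null,
  setItem: (k, v) => mem.set(k, v),
});
rec.save({ model: "mobilenet", warmAvgMs: 12.3 });
console.log(rec.load().length); // 1
```

Serializing the whole array on every save is simple and fine at this scale; a prototype recording dozens of runs stays well under localStorage's per-origin quota.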