
JumpStation Architecture

Overview

JumpStation is an AI deployment ecosystem organized as two interlocking stacks: a development and targeting stack that runs on the JumpStation hardware, and a runtime stack that runs on any device in the catalog, from Pico to Orion.

The central insight is that the two stacks share a common artifact — the JumpBundle — which is produced by the development stack and consumed by the runtime stack. Everything in the development stack exists to make that artifact as small, correct, and efficient as possible for its target.

┌──────────────────────────────────────────────────────────┐
│               Development & Targeting Stack              │
│  (runs on JumpStation / JumpStation Turbo)               │
├──────────────────────────────────────────────────────────┤
│  JumpStation Studio │  Unified IDE and project runner    │
│  DeviceBus          │  GPIO / serial bridge abstraction  │
│  Profiler           │  Measure model FLOPs, memory, lat. │
│  Target Selector    │  Map requirements → hardware class │
│  Distillation       │  Quantize / prune / distill model  │
│  JumpBundle Builder │  Package artifact + manifest       │
└────────────────────────────┬─────────────────────────────┘
                             │  .jbundle
┌────────────────────────────▼─────────────────────────────┐
│                  Runtime Stack                           │
│  (runs on any target: Pico → Orion O9)                   │
├──────────────────────────────────────────────────────────┤
│  JumpBundle Runtime │  Validate, mount, launch bundle    │
│  Inference Engine   │  Target-appropriate inference layer│
│  Launcher / UI      │  Boot experience (Linux targets)   │
│  Firmware / OS      │  MicroPython / Linux / bare-metal  │
└──────────────────────────────────────────────────────────┘

Development & Targeting Stack

This stack runs on the JumpStation or JumpStation Turbo. Its job is to take an AI model, understand its requirements, and produce an optimized JumpBundle for a specific hardware target.

1. Profiler (core/targeting/profiler.py)

The profiler measures an AI model’s actual computational requirements on the JumpStation hardware: FLOPs, peak RAM, and end-to-end latency, measured on the GPIO testbed.

The profiler encodes these measurements as a requirement vector that feeds directly into the target selector.
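As a minimal sketch of what the profiler hands to the target selector — the field names here are illustrative, not the actual profiler output — a requirement vector might look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RequirementVector:
    """Measured requirements for one model (hypothetical fields)."""
    gflops: float        # compute per inference
    peak_ram_mb: float   # peak working-set memory
    latency_ms: float    # end-to-end latency measured on the testbed

def meets(req: RequirementVector, budget_ms: float) -> bool:
    """True if the measured latency fits a caller-supplied latency budget."""
    return req.latency_ms <= budget_ms

vec = RequirementVector(gflops=1.2, peak_ram_mb=48.0, latency_ms=35.0)
```

Keeping the vector immutable (`frozen=True`) makes it safe to pass through the targeting pipeline without accidental mutation.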

2. Target Selector (core/targeting/target_selector.py)

The target selector maps a requirement vector to the minimum viable hardware class in the device catalog. It answers the question: what is the smallest device this model can run on at acceptable quality?

The selector consults each device’s profile.json — which includes compute class, available RAM, accelerator TOPS, and supported inference frameworks — and finds the lowest-cost match.

On the Turbo, the DX-M1 (25 TOPS INT8) accelerates the targeting computation itself, allowing the selector to simulate quantization error and latency across the full hardware catalog in seconds rather than hours.

3. Distillation Pipeline (core/distillation/)

Once a target is selected, the distillation pipeline compresses the model to fit its constraints: INT8/INT4 weight quantization, pruning, and knowledge distillation.

The Turbo’s DX-M1 makes this fast: baking the model’s operational envelope (inputs, outputs, typical data distribution) into the distillation process produces better-compressed models in fewer iterations.
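To make the quantization step concrete, here is a minimal sketch of symmetric per-tensor INT8 weight quantization — a simplified illustration, not the quantizer's actual implementation:

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric per-tensor INT8 quantization: w ≈ q * scale."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    # Round each weight to the nearest step and clamp to the INT8 range.
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from quantized values."""
    return [v * scale for v in q]
```

Each weight is stored in one byte instead of four, at the cost of a small round-trip error bounded by half a quantization step.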

4. JumpBundle Builder (core/jumpbundle/builder.py)

Packages the optimized model weights, runtime entry point, and targeting metadata into a schema-validated .jbundle archive. The manifest declares the minimum hardware class required to run the bundle so the runtime can enforce compatibility at install time.
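A minimal sketch of the packaging step, assuming a .jbundle is a zip archive with a JSON manifest — the manifest keys and archive layout here are illustrative, not the real schema (see jumpbundle.md for that):

```python
import io
import json
import zipfile

def build_jbundle(weights: bytes, entry_point: str, min_class: str) -> bytes:
    """Package model weights and a manifest into an in-memory archive."""
    manifest = {
        "entry_point": entry_point,          # runtime entry point
        "min_hardware_class": min_class,     # enforced at install time
    }
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("manifest.json", json.dumps(manifest))
        zf.writestr("model.weights", weights)
    return buf.getvalue()
```

Building in memory keeps the sketch self-contained; the real builder also schema-validates the manifest before writing the archive.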


Runtime Stack

This stack runs on the deployment target — any device from Pico to Orion O9.

5. Firmware / OS

Device class        Runtime environment
Pico / UNO          MicroPython or bare-metal C; TFLite Micro
ESP32               MicroPython; TFLite Micro or ESP-IDF inference
JumpStation (CM5)   Linux; ONNX Runtime, TFLite, PyTorch Mobile
JumpStation Turbo   Linux + DX-M1 SDK; accelerated INT8 inference
Orion O9            Linux; 12-core CPU, 45 TOPS NPU

6. Inference Engine

The inference engine is a thin, target-aware adapter layer that selects the appropriate inference backend for the running device. A JumpBundle declares its required backend in the manifest; the engine validates and invokes it.

Supported backends (planned): TFLite, TFLite Micro, ONNX Runtime, DX-M1 SDK, Orion NPU SDK.
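The validate-and-dispatch step can be sketched like this. The backend identifiers and the registry shape are hypothetical; only the backend product names come from the list above:

```python
# Hypothetical registry mapping backend identifiers to display names.
BACKENDS = {
    "tflite":       "TFLite",
    "tflite-micro": "TFLite Micro",
    "onnxruntime":  "ONNX Runtime",
    "dx-m1":        "DX-M1 SDK",
    "orion-npu":    "Orion NPU SDK",
}

def resolve_backend(manifest_backend: str, device_backends: set[str]) -> str:
    """Validate the bundle's declared backend against the running device."""
    if manifest_backend not in BACKENDS:
        raise ValueError(f"unknown backend: {manifest_backend}")
    if manifest_backend not in device_backends:
        raise RuntimeError(
            f"{BACKENDS[manifest_backend]} not available on this device")
    return manifest_backend
```

Failing fast at resolution time means an incompatible bundle is rejected before any model weights are loaded.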

7. JumpBundle Runtime

JumpBundle is the universal deployment artifact. Every AI application in the JumpStation ecosystem is distributed as a JumpBundle.

The runtime is responsible for validating a bundle against the schema and the device profile, mounting it, and launching its entry point.

See jumpbundle.md for the full format specification.
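The manifest's minimum-hardware-class check reduces to an ordering comparison. A minimal sketch, assuming a hypothetical total ordering of hardware classes (the real catalog may order devices differently):

```python
# Hypothetical hardware classes, ordered smallest to largest.
CLASS_ORDER = ["pico", "uno", "esp32", "jumpstation", "turbo", "orion"]

def is_compatible(bundle_min_class: str, device_class: str) -> bool:
    """Enforce the manifest's minimum hardware class at install time."""
    return CLASS_ORDER.index(device_class) >= CLASS_ORDER.index(bundle_min_class)
```

A bundle targeted at ESP32 will therefore install on a Turbo, but a Turbo-only bundle is refused on a Pico.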

8. Launcher / UI

On Linux-capable devices (JumpStation, Turbo, Orion), the launcher is the persistent boot shell — displaying installed JumpBundles, managing transitions via the Shrink animation system, and surfacing system UI. Constrained targets (Pico, UNO) have no launcher; they boot directly into the application.


Core Services

Service             Location                            Purpose
Profiler            core/targeting/profiler.py          Measure model compute/memory requirements
Target Selector     core/targeting/target_selector.py   Map requirements to minimum hardware class
Distiller           core/distillation/distiller.py      Knowledge distillation pipeline
Quantizer           core/distillation/quantizer.py      INT8/INT4 weight quantization
JumpBundle Builder  core/jumpbundle/builder.py          Package bundles from source
Silhouette          core/silhouette/                    Image preprocessing for UI assets
Device Flashing     core/flashing/                      Firmware deployment for Pico, ESP32, UNO

Data Flow: From Model to Deployed Bundle

Developer brings trained AI model to JumpStation
        │
        ▼
Profiler measures FLOPs, RAM, latency on GPIO testbed
        │
        ▼
Target Selector maps requirement vector → minimum hardware class
        │
        ▼
Distillation pipeline quantizes + compresses for target
        │  (DX-M1 accelerates this on Turbo)
        ▼
JumpBundle Builder packages model + manifest + assets → .jbundle
        │
        ▼
Flasher deploys .jbundle to target device
        │
        ▼
Runtime validates manifest against device profile
        │
        ▼
Inference engine initialized for target backend
        │
        ▼
Application runs on target hardware

Data Flow: Launching a Bundle (Linux targets)

User selects bundle in Launcher
        │
        ▼
Runtime validates bundle against schema + device profile
        │
        ▼
Shrink animation plays; inference engine initialized
        │
        ▼
Bundle entry point executes
        │
        ▼
App runs; Launcher listens for exit signal
        │
        ▼
Expand animation plays, Launcher resumes

Repository Map

core/
├── jumpbundle/     # Schema + builder
├── studio/         # JumpStation Studio: DeviceBus, project model, runner
├── targeting/      # Profiler + target selector
├── distillation/   # Quantizer + distillation pipeline
├── silhouette/     # Image processing pipeline
├── flashing/       # Firmware deployment (Pico, ESP32, UNO)
└── ui/             # Launcher and animation primitives

devices/
├── jumpstation/    # CM5 / Pi5 base device profile
├── jumpstation_rk/ # RK3588S2 base device profile
├── turbo/          # CM5 + DX-M1 profile
├── turbo_rk/       # RK3588S2 + DX-M1 profile
├── orion_o6/       # Orion O6 (8-core) profile
├── orion/          # Orion O9 (12-core, 45 TOPS) profile
├── uno_q/          # Arduino UNO Q (QRB2210 + STM32U585)
├── pico/           # RP2040 firmware configuration
├── esp32/          # ESP32 configuration
├── uno/            # Arduino UNO ATmega328P configuration
├── kidputer/       # Kidputer profile
└── jumpcade/       # Jumpcade profile

tools/
├── dataset_tools.py    # Bulk asset preparation utilities
└── power_monitor.py    # Power draw instrumentation

Further Reading