jumpstation

JumpStation Philosophy: The Arduino Moment for AI

What Arduino Did

In 2005, Arduino handed a $30 board to a design student and said: you can build hardware now. No electrical engineering degree. No proprietary toolchain. No minimum order quantity. Just a board, a USB cable, and a community.

What followed wasn’t just hobbyist tinkering. Arduino became the prototyping substrate for products that shipped in the millions — medical devices, industrial sensors, consumer electronics. The friction between “I have an idea” and “I have a working prototype” collapsed from months to hours.

AI is where hardware was in 2004. The gap between “I want to deploy an AI model” and “I have something running on real hardware” is enormous, expensive, and opaque. You either rent cloud compute you don’t own, buy enterprise hardware you can’t afford, or spend months on optimization work that has nothing to do with your actual problem.

JumpStation is the Arduino moment for AI.


The Problem with AI Infrastructure Today

Modern AI deployment has drifted toward two broken extremes:

Too big: Cloud APIs, GPU clusters, and enterprise accelerators. Powerful, but you don’t own the compute, you can’t run it offline, and the cost scales with every inference call. The model belongs to someone else.

Too hard: Running a model on embedded hardware — a microcontroller, a low-power edge device, a specific SoC — requires deep expertise in quantization, distillation, firmware toolchains, and hardware-specific optimization. This work is rarely the point. It’s just the tax.

The costs nobody name: the recurring rent of per-inference billing, the inability to run offline, and the months of optimization work that have nothing to do with the product. Underneath all of them sits one unanswered question: what hardware does this workload actually need? JumpStation’s answer is to build the tool that answers that question and closes the loop.


Right-Sized AI

Right-sized AI is the principle that an AI workload should run on exactly the hardware its requirements justify — no more, no less.

This is not a cost-cutting exercise; it is a design discipline.

Getting this right requires knowing the actual requirements — and today, nobody has a tool that measures them, maps them to hardware, and packages the result for deployment. JumpStation builds that tool.


How JumpStation Works

The JumpStation workflow has three phases:

1. Prototype on the JumpStation

The JumpStation is a WaveShare carrier board that accepts either a Raspberry Pi CM5 or a Rockchip RK3588S2 compute module. Both configurations expose the same 40-pin GPIO and run the same JumpStation stack. Connect the development board to the devices you will eventually deploy to — UNO Q, Pico, ESP32, classic UNO — and develop against real I/O in a full Linux environment.

This is the Arduino parallel: one board that speaks the language of all the boards below it. The module-swappable design adds a second dimension: native profiling on the same silicon family as your production deployment target.

2. Profile and target on the Turbo

The JumpStation Turbo adds the DX-M1 M.2 neural accelerator (25 TOPS INT8). The Turbo RK variant pairs the Rockchip RK3588S2’s on-chip 6 TOPS NPU with the DX-M1 for approximately 31 combined TOPS — the maximum distillation throughput available in the chassis.

The Turbo runs the JumpStation profiling and targeting suite: it measures your model’s computational graph, memory footprint, and latency requirements across the hardware catalog, then selects the minimum viable device class. Baking real operational data into the distillation step (operational envelope injection) improves both the quality and the speed of compression.

The output is a target declaration: “this workload fits on a Pico” or “this workload requires at minimum a UNO Q” or “this workload needs a JumpModeler.”

3. Package and deploy as a JumpBundle

A JumpBundle is a portable, self-describing deployment artifact containing model weights (at the correct quantization level), targeting metadata, the runtime entry point, and a manifest that declares the minimum hardware required. Once built, a JumpBundle deploys to any compatible device in the catalog — from a Pico to a JumpModeler Turbo — with no further manual optimization. The hard work happened at compile time, on the Turbo.


The Hardware Spectrum

JumpStation targets a continuous spectrum of AI compute. User-facing applications scale from the Arduino UNO Q (minimum Linux/UI platform) to the JumpModeler Turbo (45 TOPS, 12-core ARM). Embedded inference targets extend further down to the ultra-constrained classic UNO.

The same JumpBundle format and toolchain address every row in this table.

| Device | On-chip AI | RAM | Role |
| --- | --- | --- | --- |
| Arduino UNO (ATmega328P) | None | 2 KB | Ultra-constrained embedded inference |
| Raspberry Pi Pico | None | 264 KB | Sensor inference, TFLite Micro |
| ESP32 | None | ~520 KB | Wireless edge AI nodes |
| Arduino UNO Q | Adreno + QRB2210 AI | 4 GB | Minimum Linux/UI platform, shield-compatible |
| JumpStation (CM5 / Pi5) | CPU only | 4–8 GB | Dev host, GPIO testbed (Pi silicon) |
| JumpModeler Junior (RK3588S2) | 6 TOPS NPU | 4–8 GB | Dev host, GPIO testbed (RK silicon) |
| JumpStation Turbo (CM5 + DX-M1) | 25 TOPS | 4–8 GB | Profiling & distillation engine |
| JumpStation Turbo RK (RK3588S2 + DX-M1) | ~31 TOPS | 4–8 GB | Max-throughput profiling & distillation |
| JumpModeler (Orion O6) | 29 TOPS NPU | 8 GB | Production edge AI |
| JumpModeler Turbo (Orion O6 + DX-M1) | 45 TOPS | 16 GB | High-performance production edge AI |

The integral design means the ecosystem is complete end-to-end: you could conceivably write and deploy a JumpModeler Turbo application from an Arduino UNO Q, using the JumpStation toolchain as the bridge between them.


The Jump Worldview

1. Compute should be owned, not rented.

Every device in the JumpStation ecosystem runs inference locally. There is no cloud dependency for the application itself. You own the model, the hardware, and the data.

2. The minimum viable target is the correct target.

Running a model on more hardware than it needs is not a safety margin. It is waste — financial, thermal, and architectural. JumpStation makes finding the minimum viable target tractable.

3. The prototype and the deployment share a language.

The JumpStation GPIO testbed speaks the same protocol as the Pico and ESP32 it targets. The jump from “working on my development board” to “deployed on the field device” should be a package operation, not a rewrite.

4. Expertise is baked into the toolchain, not required of the user.

Quantization, distillation, and hardware-specific optimization are not user responsibilities. They are toolchain responsibilities. JumpStation’s suite handles them so the developer can stay focused on the application.

5. The scale ceiling is not fixed.

The same workflow that targets a Pico also targets an Orion O9. User-facing applications span from the Arduino UNO Q to the Orion O6 and O9. As the hardware catalog grows, the targeting suite grows with it. JumpStation is not a microcontroller platform with AI bolted on. It is an AI platform with microcontroller reach.

6. The ecosystem is integral.

Every device in the spectrum is a first-class citizen with the same toolchain, the same bundle format, and the same deployment workflow. A developer on an Arduino UNO Q can author, target, and deploy an application to an Orion O9 — and vice versa. The “Jump” in JumpStation is the leap from any device to any other device in the catalog, with the toolchain handling all the distance in between.


JumpStation Studio

Arduino App Lab (preloaded on the UNO Q) is the closest prior art: a Linux IDE that unifies Python and Arduino sketch development on a single board. JumpStation Studio replaces it with a platform-agnostic version that runs identically on every JumpStation Linux host.

The central abstraction is the DeviceBus — a single Python API for reading pins, driving actuators, and communicating over I2C/SPI/UART, regardless of whether the underlying transport is a GPIO breakout adapter (CM5, RK3588S2) or an STM32U585 MCU serial bridge (UNO Q). Application code is transport-agnostic. A project written on a CM5 runs unchanged on a UNO Q.

A Studio Project combines three optional layers, detailed in docs/studio.md.

When the project is ready, Studio hands it to the targeting suite and JumpBundle builder. The result is a .jbundle that deploys to any compatible device in the catalog — whether that is the development host itself (UNO Q shipped as the product) or a smaller target (Pico, ESP32) that the targeting suite identified as the minimum viable hardware.

See docs/studio.md for the full specification.


What JumpStation Is Not


Further Reading