🧬 Run a Cellule.ai worker

Turn your idle PC into a node of the distributed LLM network. One-liner install on Linux/macOS, one download on Windows.

Why? Without workers, there is no network. Every CPU or GPU that joins adds capacity, lowers latency, and broadens model coverage. Your machine earns community credits ($IAMINE) in proportion to the compute it actually delivers.

Two ways to contribute

🧬 Worker (default)

Zero config. The worker auto-detects your hardware (CPU / GPU / RAM), picks the right GGUF model from the Cellule catalog (Qwen 2B / 4B / 30B MoE), downloads it, and joins the network. Best for anyone who just wants to contribute idle compute without touching an LLM stack.

python -m iamine worker --auto
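To give an idea of what `--auto` does, here is a minimal sketch of the kind of hardware probe it performs. This is illustrative only: the function name and probe details are assumptions, not the actual iamine code.

```python
# Illustrative sketch of the kind of probe `--auto` performs.
# The real detection logic lives inside the iamine worker.
import os
import shutil

def detect_hardware() -> dict:
    """Collect a coarse hardware profile: CPU threads and GPU presence."""
    return {
        "cpu_threads": os.cpu_count() or 1,
        # Crude GPU check: is the NVIDIA driver CLI on PATH?
        "has_nvidia_gpu": shutil.which("nvidia-smi") is not None,
    }

print(detect_hardware())
```

The real worker also measures RAM and tokens/s before reporting its profile to the pool.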
🔗 Proxy (advanced)

Bring your own LLM. If you already run llama-server, Ollama, LocalAI, vLLM, or similar, the proxy forwards inference to YOUR endpoint instead of downloading a new model. You control which model runs, on which port, and with which quantization. Read the full guide before launching.

python -m iamine proxy --upstream http://localhost:8000

→ Full proxy guide

The OS guides below cover the Worker (default) flow. If you want the proxy mode, same OS tabs apply for installing iamine-ai; only the final launch command changes.

Pick your OS

🐧 Linux: one-line install · 🍎 macOS: one-line install · 🪟 Windows: .exe or pip

Linux

Quickest: one command

curl -sSL https://cellule.ai/install-worker.sh | bash

The installer detects your distro, checks Python 3.12+, creates a user-local venv at ~/.cellule-worker, installs iamine-ai, and sets up a systemd --user service so the worker survives reboots.

No sudo needed if Python 3.12+ is already on your machine. The installer runs entirely in your home directory. If Python is missing, it prints the package-manager command to install it.

What the installer does

  1. Detects OS + architecture
  2. Verifies Python 3.12+ (suggests apt/dnf/pacman command if missing)
  3. Creates ~/.cellule-worker venv
  4. Installs iamine-ai from https://cellule.ai/pypi
  5. Writes ~/.config/systemd/user/cellule-worker.service
  6. Enables linger + starts the service
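For reference, the generated user unit will look roughly like this (paths and option names are assumptions; check the actual file the installer writes):

```ini
# ~/.config/systemd/user/cellule-worker.service (sketch, not verbatim)
[Unit]
Description=Cellule.ai worker
After=network-online.target

[Service]
ExecStart=%h/.cellule-worker/bin/python -m iamine worker --auto
Restart=on-failure
RestartSec=10

[Install]
WantedBy=default.target
```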

Manual install (if you prefer)

python3.12 -m venv ~/.cellule-worker
source ~/.cellule-worker/bin/activate
pip install --upgrade pip
pip install iamine-ai -i https://cellule.ai/pypi --extra-index-url https://pypi.org/simple
python -m iamine worker --auto

Manage the service

# Status
systemctl --user status cellule-worker

# Live logs
journalctl --user -u cellule-worker -f

# Restart
systemctl --user restart cellule-worker

# Stop + disable
systemctl --user disable --now cellule-worker

GPU acceleration (NVIDIA CUDA, AMD ROCm)

The default install ships CPU-only llama-cpp-python. For GPU, rebuild with the right flags:

# NVIDIA CUDA
CMAKE_ARGS="-DGGML_CUDA=on" ~/.cellule-worker/bin/pip install --force-reinstall --no-binary=:all: llama-cpp-python

# AMD ROCm
CMAKE_ARGS="-DGGML_HIPBLAS=on" ~/.cellule-worker/bin/pip install --force-reinstall --no-binary=:all: llama-cpp-python

macOS

Quickest: one command

curl -sSL https://cellule.ai/install-worker.sh | bash

Same installer as Linux — detects macOS, creates ~/.cellule-worker, and installs a launchd agent at ~/Library/LaunchAgents/ai.cellule.worker.plist so the worker auto-starts at login.

Apple Silicon (M1/M2/M3/M4) users get Metal GPU acceleration automatically on the default install — llama-cpp-python ships with Metal support on macOS.

Requirements

Python 3.12+ (installed via Homebrew below), network access to cellule.ai:443, and enough free disk for the assigned model (see the requirements table below).

Manual install

brew install [email protected]
python3.12 -m venv ~/.cellule-worker
~/.cellule-worker/bin/pip install --upgrade pip
~/.cellule-worker/bin/pip install iamine-ai -i https://cellule.ai/pypi --extra-index-url https://pypi.org/simple
~/.cellule-worker/bin/python -m iamine worker --auto

Manage the agent

# Status
launchctl list | grep cellule

# Live logs
tail -f ~/.cellule-worker/worker.log

# Stop
launchctl unload ~/Library/LaunchAgents/ai.cellule.worker.plist

# Start again
launchctl load ~/Library/LaunchAgents/ai.cellule.worker.plist

Windows

⬇ Quick download

  • cellule-worker-cpu.zip (~47 MB): any Windows PC, no GPU
  • cellule-worker-cuda.zip (~260 MB): NVIDIA GPU (CUDA 12.5, v0.2.86; v1.0.0 build in debug)
  • cellule-proxy.zip (~45 MB): you bring your own LLM

Download the ZIP for your variant, right-click → Properties → Unblock (SmartScreen), extract the folder anywhere, and double-click the .exe inside. See full release.

How the .exe works (read before running)

Extract the ZIP anywhere. Inside the folder you'll find cellule-worker-cpu.exe (or cuda/proxy) plus an _internal folder of dependencies. Double-click the .exe. The folder stays self-contained — you can move or copy it, just keep the structure.

  • First launch: benchmarks on Qwen 3.5 2B to score your hardware. Takes 1-2 min on a first-time install (model download).
  • The pool then assigns the best model for your machine and the worker auto-upgrades (downloads + reloads transparently).
  • To reset (fresh onboarding): close the window, delete assigned_model.json and config.json in the exe's folder (or wherever you launched from), relaunch.
  • Uninstall: close the window, delete the extracted folder. That's it — no registry, no services, no admin rights required.

Option A: standalone .exe (PREPROD / BETA)

Three pre-built ZIP archives are published to GitHub Releases. Each contains the .exe plus its dependencies in a self-contained folder. PREPROD — please report any issue you hit.

Variant | For | Size | Typical speed
cellule-worker-cpu.zip | Any Windows PC (no GPU required) | ~47 MB | 5–15 t/s on modern CPUs (Qwen 2B–4B)
cellule-worker-cuda.zip | Windows PC with NVIDIA GPU (CUDA 12+) | ~300–500 MB | 50–200 t/s on NVIDIA RTX (Qwen 4B–14B)
cellule-proxy.zip | You already run your own LLM (llama-server / Ollama / vLLM / LocalAI) | ~15–25 MB | Depends on your upstream LLM (you bring it)
No GPU? No problem. The CPU build runs on any Windows 10/11 machine; the network needs contributors of all kinds, not just gaming rigs. Your idle office laptop, your old desktop, your mini PC: all of them can serve inference on Qwen 2B/4B at honest speed. Every worker strengthens the molecule.

Got an NVIDIA GPU? If you have an NVIDIA RTX (20xx / 30xx / 40xx / 50xx), download the CUDA build instead: 5–10× faster, able to serve bigger models (Qwen 14B+), and more useful to the network. Both are community contributions; pick what matches your hardware.

Installation steps (both variants):

  1. Download the right .zip from GitHub Releases.
  2. Right-click the ZIP → Properties → Unblock (if Windows SmartScreen prompts), then extract the folder anywhere.
  3. Open the extracted folder and double-click the .exe — the worker benches your hardware, picks a model, and joins the network automatically.
PREPROD / BETA notice. These binaries are unsigned for now, so SmartScreen will show a warning. The source is public and auditable on GitHub, and the CI workflow that builds them (.github/workflows/build-worker-exe.yml) is 100% reproducible. If you prefer, use Option B (pip) below to install from source yourself.

Option B: pip install

Open PowerShell (not cmd.exe) and run:

# Install Python 3.12 first if missing — winget install Python.Python.3.12
py -3.12 -m venv "$env:USERPROFILE\.cellule-worker"
& "$env:USERPROFILE\.cellule-worker\Scripts\pip.exe" install --upgrade pip
& "$env:USERPROFILE\.cellule-worker\Scripts\pip.exe" install iamine-ai -i https://cellule.ai/pypi --extra-index-url https://pypi.org/simple
& "$env:USERPROFILE\.cellule-worker\Scripts\python.exe" -m iamine worker --auto

Auto-start at boot (Task Scheduler)

# Create a scheduled task that starts the worker at logon
$action = New-ScheduledTaskAction -Execute "$env:USERPROFILE\.cellule-worker\Scripts\python.exe" -Argument "-m iamine worker --auto"
$trigger = New-ScheduledTaskTrigger -AtLogon
$settings = New-ScheduledTaskSettingsSet -StartWhenAvailable -RestartCount 3 -RestartInterval (New-TimeSpan -Minutes 2)
Register-ScheduledTask -TaskName "CelluleWorker" -Action $action -Trigger $trigger -Settings $settings

Requirements

Windows 10/11, Python 3.12+ (winget install Python.Python.3.12), network access to cellule.ai:443, and enough free disk for the assigned model (see the requirements table below).

GPU acceleration (NVIDIA)

$env:CMAKE_ARGS="-DGGML_CUDA=on"
& "$env:USERPROFILE\.cellule-worker\Scripts\pip.exe" install --force-reinstall --no-binary=:all: llama-cpp-python

How the worker joins the network

  1. The worker starts in --auto mode: it benches your hardware (CPU, GPU, RAM, tokens/s).
  2. It contacts the public federated pools on cellule.ai and picks the one where its capability fills a gap best (M12 placement).
  3. It downloads the assigned GGUF model (auto-picked by your hardware profile — 2B / 4B / 7B / 14B / 30B MoE).
  4. It handshakes Ed25519 and starts accepting jobs over WebSocket.
  5. Your credits ($IAMINE) accumulate in your account for every inference your worker completes.
✓ PREPROD / testnet. $IAMINE credits have no market value yet. Every inference you run today is building the history that will be honored when the economy goes on-chain. Early contributors are tracked in worker_wallet_snapshots.
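The "fills a gap best" idea in step 2 can be pictured as scoring each pool by how underserved the worker's model tier is there. This is a hypothetical scoring rule for illustration; the real M12 placement logic is in the iamine source.

```python
# Hypothetical gap-fill scoring for pool placement (step 2 above).
# Each pool reports how many workers already serve each model tier;
# the worker joins the pool where its tier is scarcest.

def pick_pool(my_tier: str, pools: dict[str, dict[str, int]]) -> str:
    """pools maps pool name -> {tier: worker count}. Fewest workers = biggest gap."""
    return min(pools, key=lambda name: pools[name].get(my_tier, 0))

pools = {
    "pool-eu": {"4B": 12, "30B": 1},
    "pool-us": {"4B": 3, "30B": 7},
}
print(pick_pool("4B", pools))  # the pool with fewer 4B workers
```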

Requirements by model

Examples of real hardware
Model | RAM | Disk | Speed target (CPU) | GPU
Qwen 3.5 2B | 3 GB+ | 1.3 GB | 8+ t/s | optional
Qwen 3.5 4B | 5 GB+ | 2.7 GB | 8+ t/s | optional
Qwen 3.5 9B | 8 GB+ | 5.5 GB | 8+ t/s | recommended
Qwen 3.5 27B | 20 GB+ | 16 GB | 4+ t/s | recommended
Qwen 3.5 35B-A3B (MoE) | 25 GB+ | 21 GB | 4+ t/s | recommended

Don't know your specs? Run the worker with --auto and it will pick the best fit. Too small? It will refuse gracefully and tell you why.

Proxy mode in depth

Proxy mode plugs an existing OpenAI-compatible LLM server into Cellule. The pool forwards inference requests to your upstream, and you earn community credits per completion served.

Who is it for?

Anyone who already runs an OpenAI-compatible LLM server (llama-server, Ollama, LocalAI, vLLM) and wants to contribute that capacity without downloading a second model.

Launch example

# 1. Start your local LLM server (example with llama.cpp)
llama-server -m /path/to/model.gguf --port 8000

# 2. In another terminal, start the Cellule proxy pointing at it
python -m iamine proxy --upstream http://localhost:8000 --model my-model-name
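Conceptually, the proxy is just request translation: it wraps each pool job into a standard OpenAI-style chat payload for the upstream server. A minimal sketch, assuming a hypothetical job format on the pool side (the real field names may differ):

```python
# Sketch of the proxy's per-job translation step: wrap the pool's
# prompt into an OpenAI-compatible /v1/chat/completions payload.
# The `job` field names here are hypothetical.

def to_upstream_payload(job: dict, model: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": job["prompt"]}],
        "max_tokens": job.get("max_tokens", 512),
    }

payload = to_upstream_payload({"prompt": "hello"}, "my-model-name")
# The proxy would POST this to http://localhost:8000/v1/chat/completions
```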

→ Full proxy mode documentation (template, config, troubleshooting)

Heads up. Proxy mode is for users who know what they expose. Your upstream LLM's capabilities (context length, model name, quality) must match what you advertise to the network, otherwise the pool's quality checker will blacklist your node.

Uninstall

Linux

systemctl --user disable --now cellule-worker
rm -rf ~/.cellule-worker ~/.config/systemd/user/cellule-worker.service
systemctl --user daemon-reload

macOS

launchctl unload ~/Library/LaunchAgents/ai.cellule.worker.plist
rm -rf ~/.cellule-worker ~/Library/LaunchAgents/ai.cellule.worker.plist

Windows

Unregister-ScheduledTask -TaskName "CelluleWorker" -Confirm:$false
Remove-Item -Recurse -Force "$env:USERPROFILE\.cellule-worker"

Troubleshooting

pip install fails with llama-cpp-python build error

Install the C++ build chain, then retry the install:

# Debian/Ubuntu
sudo apt install build-essential cmake

# Fedora
sudo dnf install gcc-c++ cmake

# Arch
sudo pacman -S base-devel cmake

# macOS
xcode-select --install

On Windows, install the Visual Studio Build Tools with the C++ workload.

Worker doesn't appear in /v1/status

Check logs (see "Manage the service" above). Common causes: no network to cellule.ai:443, Python version too old, firewall blocking WebSocket.

Worker hogs my CPU

The worker only runs inferences when the network routes a job to it. Between jobs it's idle. You can cap CPU via systemd (CPUQuota=50%) or launchd (Nice = 10).
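For example, on Linux you can cap the worker at half a core with a systemd drop-in (create it with `systemctl --user edit cellule-worker`):

```ini
# ~/.config/systemd/user/cellule-worker.service.d/override.conf
[Service]
CPUQuota=50%
```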

Security

Workers authenticate to pools with an Ed25519 handshake and communicate over WebSocket (see "How the worker joins the network" above). The Windows binaries are currently unsigned (PREPROD); the source and the CI workflow that builds them are public and auditable on GitHub.

← Back to homepage · Federation explained · Run your own pool · GitHub