Client User Guide

v2.9.21 — Everything you need to submit, manage, and monitor GPU compute jobs.

Getting Started

  1. Sign up at app.simulacrumlabs.com
  2. Purchase a credit pack (see Billing below)
  3. Upload your project (.zip or .7z — UE5 scene, Python training script, or Docker Only package)
  4. Configure job settings and submit

Pricing

All jobs run at a flat rate of $0.75/GPU-hour with hardened Docker containers, ClamAV scanning, and Proof of Compute verification included.

Billed per second. You only pay for the exact compute time used — no rounding, no minimums.

Security

All jobs run in hardened Docker containers with --cap-drop=ALL, --no-new-privileges, memory limits, and PID limits. All project data is securely wiped after job completion.

Billing & Credits

Credit Packs

Credits are stored as dollar amounts. Purchase packs via Stripe on the dashboard:

  Pack    GPU Hours
  $25     ~33 hrs
  $100    ~133 hrs
  $250    ~333 hrs

Per-Second Billing

Jobs are metered per second. Your credit balance updates in real time on the dashboard. If your balance reaches zero during a job, the job is paused and you have 5 minutes to add credits before it is cancelled.
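At the flat $0.75/GPU-hour rate, per-second metering works out as follows (a simple illustration of the math, not platform billing code):

```python
RATE_PER_GPU_HOUR = 0.75  # flat rate from the Pricing section

def job_cost(seconds: int, gpus: int = 1) -> float:
    """Dollar cost of a job metered per second, with no rounding or minimums."""
    return round(seconds * gpus * RATE_PER_GPU_HOUR / 3600, 6)

# A 90-minute single-GPU job:
print(job_cost(90 * 60))   # 1.125
```

So a job that runs 5,400 seconds on one GPU costs exactly $1.125, and your dashboard balance decreases by that amount as the job runs.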

Security Features

ClamAV Malware Scanning

Every project upload is scanned by ClamAV before it is distributed to nodes. If malware is detected, the upload is rejected and you are notified immediately. This protects both you and the operators running your jobs.

Proof of Compute

Simulacrum verifies that nodes are actually performing your work using Proof of Compute (PoC). During rendering jobs, the orchestrator periodically requests a frame buffer snapshot from the node and verifies it using perceptual hashing (pHash). Training jobs are verified by checking GPU utilization. If a node fails verification, it is penalized and your job is reassigned to a different node.

PoC is fully automatic. You don't need to configure anything — every job is verified.
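As an illustration of the rendering check, a minimal pHash comparison might look like the sketch below. The platform's actual implementation is not public; this is a from-scratch version using an unnormalized 2-D DCT, and the threshold value is an assumption:

```python
import numpy as np

def phash(img: np.ndarray, hash_size: int = 8, scale: int = 4) -> int:
    """Perceptual hash: downscale to a small grayscale grid, take a 2-D DCT,
    keep the low-frequency corner, and threshold bits against its median."""
    n = hash_size * scale                               # 32x32 working grid
    h, w = img.shape
    img = img[: n * (h // n), : n * (w // n)]           # crop to a multiple of n
    small = img.reshape(n, h // n, n, w // n).mean(axis=(1, 3))  # block-average downscale
    k = np.arange(n)
    dct = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))  # DCT-II matrix
    freq = dct @ small @ dct.T
    low = freq[:hash_size, :hash_size]                  # low-frequency 8x8 corner
    bits = (low > np.median(low)).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")
```

A snapshot passes when the Hamming distance between its hash and the reference hash is below a small threshold (a few bits out of 64). Because pHash compares coarse frequency structure rather than exact pixels, it tolerates compression artifacts while still catching a node that returns the wrong frame.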

Container Hardening

All jobs run inside hardened Docker containers. --cap-drop=ALL removes every Linux capability, --no-new-privileges blocks privilege escalation (for example via setuid binaries), and memory and PID limits contain runaway or fork-bombing workloads. Together these flags ensure workloads cannot escalate privileges or escape the container.
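The flags above map directly onto docker run options. A sketch of an equivalently hardened launch, expressed as the argument vector the platform might pass (the memory and PID values here are illustrative examples, not the platform's actual limits):

```python
def hardened_run_args(image: str, command: list[str],
                      mem: str = "8g", pids: int = 256) -> list[str]:
    """Argument vector for an equivalently hardened `docker run`.
    The memory/PID values are example limits, not the platform's."""
    return [
        "docker", "run", "--rm",
        "--cap-drop=ALL",                        # drop every Linux capability
        "--security-opt", "no-new-privileges",   # CLI spelling of --no-new-privileges
        f"--memory={mem}",                       # memory limit
        f"--pids-limit={pids}",                  # PID limit (stops fork bombs)
        "--gpus", "all",                         # GPU passthrough
        image, *command,
    ]

print(" ".join(hardened_run_args("python:3.11-slim", ["python", "main.py"])))
```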

API Key Management

API keys allow you to submit and manage jobs programmatically without using the web dashboard.

Creating Keys

  1. Go to Settings > API Keys on the dashboard
  2. Click Create API Key
  3. Copy the key immediately — it is only shown once

Include the key in your requests as a header: Authorization: Bearer YOUR_API_KEY
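In Python, that header looks like this. The base URL and the /jobs route are placeholders for illustration; consult the API reference for the real endpoints:

```python
import json
import urllib.request

BASE = "https://app.simulacrumlabs.com/api"   # assumed base URL, not a documented endpoint

def auth_headers(api_key: str) -> dict:
    """The Authorization header described above."""
    return {"Authorization": f"Bearer {api_key}"}

def list_jobs(api_key: str):
    """List your jobs (the /jobs route is a placeholder)."""
    req = urllib.request.Request(f"{BASE}/jobs", headers=auth_headers(api_key))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```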

Revoking Keys

Click the revoke button next to any key in Settings. Revoked keys stop working immediately: jobs already submitted with a revoked key run to completion, but the key can no longer be used to submit new ones.

Dual Authentication

For sensitive operations (cancelling jobs, deleting projects, managing billing), the API requires both your API key and a session cookie from the dashboard. This prevents a leaked API key from being used to modify your account.
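A request carrying both credentials might be built like this. The session cookie name is an assumption; the dashboard sets the real one when you log in:

```python
import urllib.request

def dual_auth_request(url: str, api_key: str, session_cookie: str,
                      method: str = "POST") -> urllib.request.Request:
    """Build a request carrying both the API key and the dashboard session,
    as required for sensitive operations."""
    return urllib.request.Request(
        url,
        method=method,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Cookie": f"session={session_cookie}",   # cookie name is an assumption
        },
    )
```

With this scheme, an attacker holding only the leaked API key cannot cancel jobs, delete projects, or touch billing, because they lack the second factor.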

Job Modes

Simulacrum supports four job modes:

Annotations (Single Render)

Upload a UE5 scene and render frames with annotations. The node launches UE5, captures the output, and uploads rendered frames to cloud storage.

UE5 + Python Training

Upload a UE5 project with a Python training script. The node launches UE5 and runs your Python code in a Docker container alongside it. Supports AirSim, reinforcement learning, and custom simulation loops. AirSim settings.json is automatically injected.

Parameter Sweeps

Submit a sweep configuration to run the same scene with varying parameters. Each sub-job is dispatched independently and can run on different nodes in parallel.

Docker Only

Run Python/GPU workloads in Docker containers without UE5. Ideal for IsaacSim, Gazebo, PyBullet, pure ML training, or any GPU compute that doesn't need Unreal Engine.

Docker Only jobs require nothing but Docker on the operator's node; no UE5 installation is needed. This makes more nodes eligible, so your job starts faster.

How to Submit a Docker Only Job

  1. Select the Docker Only render mode on the submit form
  2. Upload a .zip or .7z containing your Python project
  3. Enter the Python script filename (defaults to main.py)
  4. Optionally add Python arguments
  5. Submit — your code runs in a hardened Docker container with GPU passthrough

Job Lifecycle

  1. Upload — Project is uploaded to Azure Blob Storage and scanned by ClamAV
  2. Queue — Job enters the queue and waits for a matching node (GPU, bandwidth)
  3. Dispatch — Orchestrator assigns the job to the best available node
  4. Execute — Node downloads the project, runs it in a hardened Docker container, streams progress
  5. Verify — Proof of Compute checks run periodically during execution
  6. Complete — Results are uploaded to Azure and available for download on the dashboard

Tip: Jobs are prioritized by node bandwidth and reputation. Higher-bandwidth nodes receive your job faster, and operators with high reputation scores are preferred.
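If you are driving the lifecycle from a script, a simple poll loop covers stages 2 through 6. The endpoint path and status names below are assumptions for illustration:

```python
import json
import time
import urllib.request

TERMINAL_STATES = {"complete", "failed", "cancelled"}   # assumed status names

def is_terminal(status: str) -> bool:
    """True once a job has left the queue/dispatch/execute/verify stages."""
    return status in TERMINAL_STATES

def wait_for_job(job_id: str, api_key: str, interval: float = 10.0) -> str:
    """Poll a job until it reaches a terminal state (route is illustrative)."""
    url = f"https://app.simulacrumlabs.com/api/jobs/{job_id}"   # assumed route
    while True:
        req = urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})
        with urllib.request.urlopen(req) as resp:
            status = json.load(resp)["status"]
        if is_terminal(status):
            return status
        time.sleep(interval)
```

Once the loop returns "complete", the results are ready to download from the dashboard (or via the API, if a download endpoint is exposed).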