The Network

The Foundations of Decentralized Cloud Gaming.

YOM’s architecture is purpose-built for decentralized, high-performance cloud gaming. It combines a custom pixel streaming solution, secure edge computing, and intelligent decentralized orchestration to deliver interactive streaming with near-local responsiveness and reliability.

The platform spans from proprietary pixel streaming services and developer SDKs to the underlying network of trusted GPU nodes coordinated by an AI-driven scheduler. Together, these components enable any AAA game to be streamed instantly to any device with sub-40 ms latency, high reliability (through rapid failover), and end-to-end security – all at a fraction of the cost of traditional cloud gaming.

Peer-to-Peer Pixel Streaming

YOM deploys its own lightweight pixel-streaming service that runs games on remote consumer-grade GPUs and streams the video feed back to players via WebRTC for minimal latency. Once a session is approved, the gameplay is delivered over a direct peer-to-peer connection between the node and the player’s device.

YOM’s streaming protocol establishes an encrypted video/audio stream and input channel without routing through any central server, minimizing latency by eliminating extra hops. Game audio/video and player input data are end-to-end encrypted, so the content remains private and secure even if it traverses relays or public networks.

Discovery & Attestation

During live operations and at the end of every gaming session, the node reports telemetry (FPS, 95th-percentile latency, thermals); if thresholds are exceeded, the scheduler can dynamically scale the node's concurrent player streams down or up accordingly.
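The threshold check described above can be sketched as a simple decision function. This is an illustrative sketch, not YOM's implementation: the threshold values and the `Telemetry` fields are assumptions chosen to mirror the metrics named in the text (FPS, 95th-percentile latency, thermals).

```python
from dataclasses import dataclass

# Illustrative thresholds -- real values would be set by the scheduler.
MIN_FPS = 60
MAX_P95_LATENCY_MS = 40
MAX_GPU_TEMP_C = 83

@dataclass
class Telemetry:
    fps: float
    p95_latency_ms: float
    gpu_temp_c: float

def scaling_decision(t: Telemetry) -> int:
    """Return -1 to shed a concurrent stream, +1 to admit one more, 0 to hold."""
    if (t.fps < MIN_FPS
            or t.p95_latency_ms > MAX_P95_LATENCY_MS
            or t.gpu_temp_c > MAX_GPU_TEMP_C):
        return -1   # a threshold is exceeded: scale down
    if t.fps > MIN_FPS * 1.5 and t.gpu_temp_c < MAX_GPU_TEMP_C - 10:
        return +1   # comfortable headroom: admit another stream
    return 0
```

A node reporting 45 FPS would shed a stream; one holding 120 FPS at 60 °C could take on another.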

Nodes publish signed availability records (location hash, GPU class, live load) to a distributed hash table / on‑chain registry. Clients query the same ledger, pick candidates within a latency radius, then request the node’s latest attestation. If the proof checks out the client proceeds; if not, the node is black‑listed until it re‑establishes trust. This serverless matchmaking eliminates single points of failure and allows the network to scale organically while remaining secure.
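The publish-verify-select flow can be illustrated with a minimal sketch. All names here are hypothetical, and HMAC with a shared key stands in for whatever signature scheme the registry actually uses (an on-chain registry would use asymmetric keys); the point is the shape of the flow: signed records in, verification plus a latency-radius filter out.

```python
import hashlib
import hmac

NODE_KEY = b"demo-shared-secret"  # stand-in for the node's real signing key

def sign_record(record: dict, key: bytes) -> dict:
    """Attach a signature over the record's sorted fields."""
    payload = repr(sorted(record.items())).encode()
    return dict(record, sig=hmac.new(key, payload, hashlib.sha256).hexdigest())

def verify_record(record: dict, key: bytes) -> bool:
    """Recompute the signature over everything except 'sig' and compare."""
    payload = repr(sorted((k, v) for k, v in record.items() if k != "sig")).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record.get("sig", ""), expected)

def pick_candidates(records: list, max_rtt_ms: int = 40) -> list:
    """Keep only verified nodes within the latency radius, least loaded first."""
    ok = [r for r in records if verify_record(r, NODE_KEY) and r["rtt_ms"] <= max_rtt_ms]
    return sorted(ok, key=lambda r: (r["load"], r["rtt_ms"]))
```

A node with a forged signature is dropped at verification time, which is the "blacklisted until it re-establishes trust" path in miniature.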

On shutdown—or if the boot drive is removed—the OS triggers a rollback routine that asserts a PCIe reset, DMA‑scrubs RAM and VRAM, clears any drive keys, and reboots, ensuring no player data or studio IP persists. Updates follow an atomic A/B scheme: a new signed image is written to the inactive slot and only activated after signature verification and checksum pass, with automatic rollback on failure.
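The atomic A/B activation logic can be sketched in a few lines. This is a toy model under stated assumptions: slots are dict entries rather than disk partitions, and a SHA-256 checksum stands in for the full signature-plus-checksum verification the text describes.

```python
import hashlib

def try_activate(slots: dict, new_image: bytes, signed_checksum: str) -> str:
    """Write a new image to the inactive slot; flip only after verification."""
    inactive = "B" if slots["active"] == "A" else "A"
    slots[inactive] = new_image                        # 1. write to inactive slot
    if hashlib.sha256(new_image).hexdigest() == signed_checksum:
        slots["active"] = inactive                     # 2. activate after the check passes
    # 3. on failure, nothing was flipped: the old slot keeps booting (rollback)
    return slots["active"]
```

Because the active pointer only moves after verification succeeds, a corrupt or tampered image can never become the boot target.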

The nodes boot a minimal, read‑only Linux image. On first launch the operating system:

  • Auto‑detects hardware & installs signed GPU drivers so the discrete card is exposed at full performance (no user intervention required).

  • Runs a registration module that authenticates the user, measures uplink bandwidth, RTT to regional probes, idle GPU capacity, and temperature; it then publishes this capability vector to the discovery ledger for HyperOrch scheduling.

  • Provides an isolated execution sandbox that mounts each game image read‑only and passes the GPU through exclusively to the active session.
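The capability vector the registration module publishes might look something like the sketch below. The field names and values are assumptions for illustration; the actual ledger schema is not specified in this document.

```python
import json

def capability_vector(gpu_class: str, uplink_mbps: float,
                      rtt_ms: dict, idle_gpu_pct: float, temp_c: float) -> str:
    """Serialize the node's measured capabilities for the discovery ledger."""
    vec = {
        "gpu_class": gpu_class,
        "uplink_mbps": uplink_mbps,
        "rtt_ms": rtt_ms,           # RTT to regional probes, keyed by region
        "idle_gpu_pct": idle_gpu_pct,
        "temp_c": temp_c,
    }
    # Deterministic key ordering so the record can be signed reproducibly.
    return json.dumps(vec, sort_keys=True)
```

Deterministic serialization matters here: the same measurements must always produce the same bytes, or the signed record could never be re-verified.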

HyperOrch: AI Scheduler

Workload allocation across the YOM network is handled by HyperOrch, YOM’s decentralized orchestration engine.

HyperOrch continuously gathers telemetry from each node – a multi-dimensional “capability vector” including available GPU/CPU capacity, memory, bandwidth, encoder load, thermal headroom, and more. Similarly, each game title is characterized by a resource footprint profile (GPU demand, memory needs, expected bitrate, etc.).

Using these inputs, HyperOrch’s AI-driven scheduler predicts the expected performance (e.g. framerate) for a given game on each candidate node and dynamically matches players to the optimal node in real time. Unlike naive approaches that just pick the nearest server, HyperOrch filters and ranks nodes by multiple criteria (player-to-node latency, hardware suitability for the game, current load, etc.) to find a host that can deliver smooth 60+ FPS and <40 ms ping for that session.
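The filter-then-rank idea can be sketched as follows. The scoring model is a deliberately naive placeholder (HyperOrch's actual predictor is AI-driven and not described here), and all field names are assumptions; the sketch exists to show why "nearest server" alone is the wrong criterion.

```python
def rank_nodes(nodes: list, game: dict, max_ping_ms: int = 40, min_fps: int = 60) -> list:
    """Filter out unsuitable hosts, then rank the rest by predicted performance."""

    def predicted_fps(n: dict) -> float:
        # Toy linear model: spare GPU capacity relative to the game's demand.
        return n["gpu_tflops"] * (1 - n["load"]) / game["gpu_tflops_needed"] * 60

    viable = [n for n in nodes
              if n["ping_ms"] <= max_ping_ms          # latency radius
              and n["vram_gb"] >= game["vram_gb"]     # hardware suitability
              and predicted_fps(n) >= min_fps]        # predicted smoothness
    return sorted(viable, key=lambda n: (-predicted_fps(n), n["ping_ms"]))
```

In the test below, the nearest node is heavily loaded and would lag, so the ranker passes over it for a slightly farther but idle host, exactly the case where nearest-server matchmaking fails.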

The orchestrator even learns from each session: it updates its model with actual performance outcomes, continuously improving placement decisions over time. This intelligent scheduling maximizes network-wide performance and efficiency – avoiding bad host-game pairings that might lag, fully utilizing available GPU headroom, and reducing costs by optimally balancing the load on community nodes.
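The feedback loop above can be sketched as an online estimate update. An exponential moving average stands in here for whatever model update HyperOrch actually performs; the names and the `alpha` value are assumptions.

```python
def update_estimate(est: dict, node_id: str, game_id: str,
                    observed_fps: float, alpha: float = 0.2) -> float:
    """Blend an observed session outcome into the stored (node, game) estimate."""
    key = (node_id, game_id)
    prev = est.get(key, observed_fps)     # first observation seeds the estimate
    est[key] = (1 - alpha) * prev + alpha * observed_fps
    return est[key]
```

Each completed session nudges the stored prediction toward reality, so a pairing that looked good on paper but underperformed gets ranked lower next time.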

In practical terms, HyperOrch’s real-time decision-making ensures each player is connected to a node that will give them a console-quality experience, and it can proactively migrate or redistribute sessions if a better match becomes available.
