Tusky is a managed infrastructure layer on top of the Walrus decentralized storage network. This page describes the key components and data flows that power the platform.

High-Level Overview

┌─────────────┐     ┌─────────────┐     ┌──────────────────┐     ┌───────────────┐
│   Client    │────▶│  Tusky API  │────▶│  Queue Workers   │────▶│ Walrus Network│
│  (SDK/API)  │     │  (REST)     │     │  (Publishers)    │     │               │
└─────────────┘     └─────────────┘     └──────────────────┘     └───────────────┘
                           │                                             │
                           ▼                                             ▼
                    ┌─────────────┐                              ┌───────────────┐
                    │ PostgreSQL  │                              │  Aggregators  │
                    │ + Redis     │                              │               │
                    └─────────────┘                              └───────┬───────┘
                                                                         │
                                                                         ▼
                                                                 ┌───────────────┐
                                                                 │   CDN (CF)    │
                                                                 │ + Aggregator  │
                                                                 └───────────────┘

Write Path

When you upload a file through the SDK or API, this is the journey your data takes:
  1. Upload received: The file is sent to the Tusky REST API. For encrypted environments, the SDK encrypts the file client-side before transmission.
  2. Temporary storage: The file is held in a staging area while it awaits processing. This ensures fast upload responses even under high load.
  3. Batched into quilts: Small files are batched together into quilts — composite blobs that reduce the number of individual Walrus transactions and lower per-file costs.
  4. Published to Walrus: Queue workers pick up batches and publish them to the Walrus network using the user’s dedicated managed wallet. The worker handles encoding, storage node coordination, and transaction signing.
  5. Metadata recorded: Once published, the blob ID, quilt patch ID, and storage details are recorded in PostgreSQL. The file is now accessible through your aggregator.
The batching step is transparent to users. Each file retains its own identity and metadata — quilts are an internal optimization for cost and throughput.
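The batching step above can be sketched in miniature. This is an illustrative model only; the function name, greedy packing strategy, and 10 MB threshold are assumptions for the sketch, not Tusky internals:

```python
# Illustrative sketch of steps 2-3: group small staged files into quilts.

def batch_into_quilts(staged_files, max_quilt_bytes=10 * 1024 * 1024):
    """Greedily pack (name, bytes) pairs into quilts of at most max_quilt_bytes."""
    quilts, current, current_size = [], [], 0
    for name, data in staged_files:
        if current and current_size + len(data) > max_quilt_bytes:
            quilts.append(current)          # close the full quilt
            current, current_size = [], 0
        current.append((name, data))
        current_size += len(data)
    if current:
        quilts.append(current)              # flush the final partial quilt
    return quilts

staged = [("a.png", b"x" * 4_000_000),
          ("b.png", b"y" * 7_000_000),
          ("c.txt", b"z" * 100)]
quilts = batch_into_quilts(staged)
# a.png gets its own quilt; b.png would overflow it, so b.png and c.txt share the next one
```

Note that each file keeps its own entry inside the returned batches, mirroring how every file retains its identity and metadata within a quilt.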

Read Path

When a client requests a file through your aggregator:
  1. Request arrives: The client sends an HTTP request to your private aggregator URL (e.g. https://myproject.mytusky.xyz/qp_abc123).
  2. CDN edge check: Cloudflare’s edge network checks for a cached copy. If the file is cached, it is returned immediately from the nearest edge node.
  3. Aggregator fetch: On a cache miss, the request is routed to Tusky’s aggregator infrastructure. The aggregator contacts Walrus storage nodes to reconstruct the blob.
  4. Response delivered: The file is returned to the client and cached at the CDN edge for future requests.
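Taken together, these steps form a classic read-through cache. A minimal sketch, with a plain dict standing in for Cloudflare’s edge cache and a stub standing in for aggregator reconstruction (all names are illustrative):

```python
edge_cache = {}  # stands in for Cloudflare's edge cache

def reconstruct_from_walrus(blob_id):
    # Stub for the aggregator contacting Walrus storage nodes (step 3).
    return b"blob-bytes-for-" + blob_id.encode()

def serve(blob_id):
    if blob_id in edge_cache:                    # step 2: CDN edge check
        return edge_cache[blob_id], "edge-hit"
    data = reconstruct_from_walrus(blob_id)      # step 3: fetch on cache miss
    edge_cache[blob_id] = data                   # step 4: cache for future requests
    return data, "edge-miss"

first = serve("qp_abc123")   # miss: reconstructed, then cached at the edge
second = serve("qp_abc123")  # hit: served straight from the cache
```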

Core Components

REST API

The central interface for all Tusky operations. Handles authentication, authorization, environment/file CRUD, and orchestrates the write and read paths.
  • Built with modern web frameworks on Kubernetes
  • Horizontally scalable behind a load balancer
  • Rate-limited and protected by Cloudflare WAF

Queue Workers (Publishers)

Background workers that process the write path:
  • Pick up uploaded files from the staging area
  • Batch small files into quilts
  • Publish blobs to Walrus using managed wallets
  • Handle retries and error recovery
Each user’s publishing operations are isolated through dedicated managed wallets. This ensures one user’s activity cannot affect another’s publishing capacity.
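Retry and error recovery in background publishers is commonly implemented as exponential backoff. A schematic sketch only; the `publish` callable, attempt count, and delay schedule are assumptions, not Tusky’s actual policy:

```python
import time

def publish_with_retries(publish, batch, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call publish(batch), retrying with exponential backoff on failure."""
    for attempt in range(max_attempts):
        try:
            return publish(batch)
        except Exception:
            if attempt == max_attempts - 1:
                raise                          # out of attempts: surface the error
            sleep(base_delay * 2 ** attempt)   # wait 1s, 2s, 4s, ... between tries
```

Injecting `sleep` as a parameter keeps the backoff schedule testable without real waiting.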

Aggregators

Aggregators reconstruct blobs from the Walrus network and serve them to clients:
  • Primary aggregator fleet for standard operations
  • Fallback aggregator on separate infrastructure for resilience
  • Optimized for low-latency blob reconstruction

Managed Wallets

Every Tusky user gets a dedicated Sui wallet, managed by Tusky’s infrastructure. Queue workers use this wallet to sign and pay for that user’s Walrus storage transactions, keeping each user’s publishing activity isolated.

Database Layer

  • PostgreSQL: Primary state store — environments, files, members, accounts, transaction history.
  • Redis: Caching layer for hot metadata, rate-limiting counters, and queue management.
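The rate-limiting counters held in Redis are often simple fixed-window counts built from `INCR` plus `EXPIRE`. A pure-Python sketch of that logic, with a dict standing in for Redis (the limit and window size are illustrative, not Tusky’s real quotas):

```python
import time

counters = {}  # (user_id, window) -> count; stands in for Redis keys with a TTL

def allow_request(user_id, limit=100, window_secs=60, now=None):
    """Fixed-window limiter: count requests per user per window, reject over limit."""
    now = time.time() if now is None else now
    window = int(now // window_secs)           # which window this request falls in
    key = (user_id, window)
    counters[key] = counters.get(key, 0) + 1   # Redis equivalent: INCR, then EXPIRE
    return counters[key] <= limit
```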

CDN (Cloudflare)

All aggregator traffic passes through Cloudflare:
  • Edge caching at 300+ locations worldwide
  • WAF for DDoS protection and request filtering
  • HTTP/3 and Brotli compression for optimal performance
  • Custom rules for enterprise customers

Infrastructure

All Tusky services run on Kubernetes, providing:

Auto-scaling

Worker pools and API replicas scale automatically based on load.

Rolling Deploys

Zero-downtime deployments for API and worker updates.

Health Monitoring

Continuous health checks with automatic pod replacement on failure.

Resource Isolation

Services are isolated with resource limits to prevent noisy-neighbor effects.

What’s Next