Overview
The Hybrid Delta Protocol is a two-mode synchronization engine:

- Delta Sync (Primary): Transmits only changes since the last sync
- Full-State Sync (Fallback): Sends the entire graph when needed
- Efficiency: Minimal bandwidth for frequent updates
- Reliability: Guaranteed consistency even after long offline periods
- Performance: Compressed payloads for fast transmission
High-Efficiency Delta Sync
For peers that communicate frequently, transmitting the entire graph for minor changes is inefficient. GenosDB’s primary synchronization method is based on delta updates, powered by a persistent, local Operation Log (Oplog).

The Process, Step-by-Step
Operation Logging (Oplog)
Every local mutation (put, remove, link) is recorded as an entry in a capped, sliding-window log persisted in localStorage. Each entry contains:

- Operation type (upsert, remove, link)
- Affected node id
- Precise Hybrid Logical Clock (HLC) timestamp
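An Oplog entry can be pictured as a plain object. The field names below follow the list above, but the exact schema and HLC encoding are illustrative assumptions, not GenosDB's internal format:

```javascript
// Illustrative Oplog entry; field names mirror the list above but are
// assumptions, not GenosDB's exact schema. The HLC timestamp is shown as
// "wallClockMillis:counter:peerId", a common HLC string encoding.
const oplogEntry = {
  type: 'upsert',                  // 'upsert' | 'remove' | 'link'
  id: 'message/42',                // affected node id
  timestamp: '1718000000000:3:pA', // Hybrid Logical Clock timestamp
};
```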
Sync Handshake
When a peer connects or wants to catch up, it broadcasts a sync request containing:

- The HLC timestamp of the last operation it processed (globalTimestamp)
- A brand-new peer sends globalTimestamp: null
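The two handshake cases can be sketched as message payloads. The message shape here is an assumption for illustration; only the globalTimestamp semantics come from the protocol description above:

```javascript
// Illustrative sync request payloads; the message envelope is an assumption,
// only the globalTimestamp field is described by the protocol.

// A returning peer reports the HLC timestamp of the last operation it processed:
const catchUpRequest = { type: 'sync', globalTimestamp: '1718000000000:3:pA' };

// A brand-new peer has no history, so it sends null and will receive full state:
const newPeerRequest = { type: 'sync', globalTimestamp: null };
```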
Delta Calculation & Hydration
Upon receiving a sync request, the peer:

- Filters its Oplog for all operations with timestamp > globalTimestamp
- Hydrates any upsert operations by fetching the full current value from the graph
- Creates a self-contained array of complete operations
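The steps above can be sketched as a single function. The oplog and graph shapes, and the use of string-comparable HLC timestamps, are simplifying assumptions standing in for GenosDB's internals:

```javascript
// Sketch of delta calculation and hydration. The oplog/graph shapes and the
// string-comparable HLC timestamps are simplifying assumptions.
function computeDelta(oplog, graph, globalTimestamp) {
  return oplog
    // 1. Keep only operations newer than the requester's last-seen timestamp.
    .filter((op) => globalTimestamp === null || op.timestamp > globalTimestamp)
    // 2. Hydrate upserts with the full current value from the graph, so the
    //    delta array is self-contained.
    .map((op) =>
      op.type === 'upsert' ? { ...op, value: graph.get(op.id) } : { ...op }
    );
}

const graph = new Map([['user/1', { name: 'Ada' }]]);
const oplog = [
  { type: 'upsert', id: 'user/1', timestamp: '100:0:pA' },
  { type: 'remove', id: 'user/2', timestamp: '090:0:pA' },
];
// Only the newer upsert survives the filter, hydrated with its current value.
const delta = computeDelta(oplog, graph, '095:0:pB');
```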
Minimal & Compressed Transfer
The delta array is:
- Serialized using MessagePack (binary format)
- Compressed with pako (deflate)
- Sent to the requesting peer in a deltaSync message
Performance Benefits
Bandwidth Reduction
Delta sync can reduce network traffic by 90-99% compared to full-state sync for active peers with frequent small changes.
| Scenario | Full State | Delta Sync | Savings |
|---|---|---|---|
| 10 new messages | 500 KB | 5 KB | 99% |
| 100 updates | 500 KB | 50 KB | 90% |
| 1 node change | 500 KB | 0.5 KB | 99.9% |
Guaranteed Consistency Fallback
A delta update is only possible if a peer’s history overlaps with the Oplog of its peers. GenosDB’s engine gracefully handles scenarios where this is not the case by automatically triggering a Full-State Fallback.

Fallback Triggers

Full-state sync is initiated under two specific conditions:

1. Peer Too Far Behind

A peer receives a sync request with a globalTimestamp that is older than the oldest operation in its Oplog. This happens when a peer has been offline longer than the Oplog window.

2. New Peer Joining

A peer receives a sync request from a new peer where globalTimestamp is null. This ensures new peers receive the complete current state immediately.

The Fallback Process
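Both trigger conditions can be captured in one decision function. The oplog shape and string-comparable HLC timestamps are simplifying assumptions:

```javascript
// Sketch of the fallback decision covering both trigger conditions.
// String-comparable HLC timestamps are a simplifying assumption.
function needsFullStateSync(oplog, globalTimestamp) {
  if (globalTimestamp === null) return true; // new peer joining
  if (oplog.length === 0) return false;      // nothing newer to send either way
  const oldest = oplog[0].timestamp;         // oplog is ordered oldest-first
  return globalTimestamp < oldest;           // peer too far behind the window
}
```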
Full-State Transmission
Instead of a delta, the up-to-date peer:
- Serializes its entire current graph state (all nodes and relationships)
- Compresses with pako
- Sends in a syncReceive message
State Reconciliation & Reset
The desynchronized peer receives the full graph and performs critical reconciliation:

- Discards its outdated local graph state
- Replaces it with the new graph
- Clears its own Oplog (previous history is invalid)
- Scans the new graph to find the highest HLC timestamp
- Fast-forwards its HybridClock to this timestamp
- Sets globalTimestamp to this value

This ensures the peer is correctly positioned in time and can immediately participate in future delta syncs.
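The reconciliation steps can be sketched as follows. The peer object, graph shape, and HLC string ordering are illustrative assumptions:

```javascript
// Sketch of state reconciliation after a full-state sync. The peer object,
// Map-based graph, and string-ordered HLC timestamps are assumptions.
function reconcileFullState(peer, newGraph) {
  peer.graph = newGraph; // discard outdated state, adopt the new graph
  peer.oplog = [];       // previous history is invalid

  // Scan the received graph for the highest HLC timestamp...
  let maxTs = null;
  for (const node of newGraph.values()) {
    if (maxTs === null || node.timestamp > maxTs) maxTs = node.timestamp;
  }

  // ...then fast-forward the clock so future delta syncs line up.
  peer.clock = maxTs;
  peer.globalTimestamp = maxTs;
  return peer;
}

const peer = reconcileFullState(
  { graph: null, oplog: [{ type: 'upsert' }], clock: '050:0:pB', globalTimestamp: '050:0:pB' },
  new Map([
    ['user/1', { timestamp: '100:0:pA' }],
    ['user/2', { timestamp: '200:0:pA' }],
  ])
);
```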
Operation Log (Oplog) Architecture
The Oplog is a critical component that makes delta sync possible.

Configuration
Characteristics
- Capped Size: Circular buffer, keeps most recent N operations
- Persistent: Stored in localStorage to survive page refreshes
- Sliding Window: Automatically evicts oldest operations when full
- Timestamp Indexed: Fast filtering by HLC timestamp
Structure
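The characteristics above can be combined into a minimal sketch of the structure. Persistence to localStorage is omitted here, and the class shape is an assumption rather than GenosDB's implementation:

```javascript
// Minimal sketch of a capped, sliding-window Oplog. localStorage persistence
// is omitted; the class shape is an assumption, not GenosDB's implementation.
class Oplog {
  constructor(capacity = 1000) {
    this.capacity = capacity;
    this.entries = []; // ordered oldest-first by HLC timestamp
  }
  append(entry) {
    this.entries.push(entry);
    if (this.entries.length > this.capacity) this.entries.shift(); // evict oldest
  }
  since(globalTimestamp) {
    // Timestamp-indexed filtering for delta calculation.
    return this.entries.filter((e) => e.timestamp > globalTimestamp);
  }
}

const log = new Oplog(2);
log.append({ id: 'a', timestamp: '1' });
log.append({ id: 'b', timestamp: '2' });
log.append({ id: 'c', timestamp: '3' }); // evicts 'a'
```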
Limitations
The Oplog window size determines how long a peer can be offline before requiring full-state sync:

| Oplog Size | Avg Updates/Min | Offline Window |
|---|---|---|
| 100 | 10 | ~10 minutes |
| 1,000 (default) | 10 | ~100 minutes |
| 10,000 | 10 | ~16 hours |
| 1,000 | 100 | ~10 minutes |
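The offline window in the table follows directly from dividing the Oplog size by the update rate:

```javascript
// Offline tolerance before a full-state sync becomes necessary:
// once more operations occur than the Oplog holds, the window has passed.
function offlineWindowMinutes(oplogSize, avgUpdatesPerMin) {
  return oplogSize / avgUpdatesPerMin;
}
```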
Configure oplogSize based on your application’s update frequency and expected offline periods.

Compression and Serialization
Both delta and full-state sync use the same compression pipeline.

Compression Ratios

Typical compression ratios:

- Text-heavy data: 70-85% reduction
- JSON with many keys: 60-75% reduction
- Binary data: 10-30% reduction
- Already compressed: Minimal reduction
Security Integration
When the Security Manager is enabled, synchronization containers are signed. On receipt, a peer will:

- Verify signature before processing
- Check permissions for each operation
- Filter out unauthorized operations
- Apply only verified changes
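The verify-then-apply flow above can be sketched generically. The signature check and RBAC predicate are stand-in callbacks; GenosDB's actual Security Manager APIs are not shown here:

```javascript
// Sketch of the verify-then-apply flow. verifySignature and isAuthorized are
// stand-in callbacks; they do not represent GenosDB's Security Manager API.
function applyVerifiedOps(ops, verifySignature, isAuthorized, apply) {
  if (!verifySignature(ops)) return []; // reject the whole signed container
  const allowed = ops.filter(isAuthorized); // drop unauthorized operations
  allowed.forEach(apply);                   // apply only verified changes
  return allowed;
}

const applied = [];
const result = applyVerifiedOps(
  [{ id: 'a', allowed: true }, { id: 'b', allowed: false }],
  () => true,            // pretend the container signature checks out
  (op) => op.allowed,    // pretend RBAC permits only op 'a'
  (op) => applied.push(op.id)
);
```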
Zero-Trust Sync
Even in full-state sync, every node’s lastModifiedBy is verified against RBAC rules, ensuring no invalid data propagates.

Synchronization Logs
In development, GenosDB provides clear sync logs.

Best Practices
Configure Oplog Size Appropriately
- Higher oplogSize = longer offline tolerance but more memory
- Lower oplogSize = less memory but more full-state syncs
- Default (1000) works well for most applications
Monitor Sync Patterns
- Frequent full-state syncs indicate peers going offline too long
- Consider increasing oplogSize or implementing wake-on-network
- Use the cellular mesh for large peer counts to reduce sync load
Handle Sync Events
Optimize Data Structure
- Smaller nodes sync faster
- Avoid storing large binary blobs in node values
- Use links to reference external data
Performance Metrics
Based on real-world testing:

| Metric | Delta Sync | Full State |
|---|---|---|
| Latency (100 ops) | 50-100ms | 200-500ms |
| Latency (1K ops) | 100-200ms | 500-1500ms |
| Bandwidth (100 nodes) | 5-10 KB | 50-200 KB |
| CPU usage | Very Low | Moderate |
| UI blocking | None | None |
Related Pages
- Hybrid Logical Clock: How timestamps enable delta sync
- Worker Architecture: How synced data gets persisted
- GenosRTC Architecture: Network layer that transports sync messages
- Distributed Trust: How sync verifies security in a P2P network