GenosDB’s synchronization protocol is designed to handle the reality of distributed networks where peers can have vastly different states. The engine intelligently switches between high-efficiency delta updates and guaranteed full-state fallback to ensure eventual consistency with optimal performance.

Overview

The Hybrid Delta Protocol is a two-mode synchronization engine:
  1. Delta Sync (Primary): Transmits only the changes made since the last sync
  2. Full-State Sync (Fallback): Sends the entire graph when a delta is not possible
This architecture ensures:
  • Efficiency: Minimal bandwidth for frequent updates
  • Reliability: Guaranteed consistency even after long offline periods
  • Performance: Compressed payloads for fast transmission

High-Efficiency Delta Sync

For peers that are frequently communicating, transmitting the entire graph for minor changes is inefficient. GenosDB’s primary synchronization method is based on delta updates, powered by a persistent, local Operation Log (Oplog).

The Process, Step-by-Step

Step 1: Operation Logging (Oplog)

Every local mutation (put, remove, link) is recorded as an entry in a capped, sliding-window log persisted in localStorage. Each entry contains:
  • Operation type (upsert, remove, link)
  • Affected node id
  • Precise Hybrid Logical Clock (HLC) timestamp
// Example oplog entry
{
  type: 'upsert',
  id: 'node-123',
  timestamp: { physical: 1709582400000, logical: 5 }
}
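Because HLC timestamps are `{ physical, logical }` pairs rather than plain numbers, they cannot be compared with `>` directly. A minimal comparator (a sketch; GenosDB's internal helper may be named differently) orders entries by wall-clock time first, breaking ties with the logical counter:

```javascript
// Sketch of an HLC comparator: returns > 0 when `a` is newer than `b`,
// < 0 when older, and 0 when the two timestamps are identical.
function compareHLC(a, b) {
  if (a.physical !== b.physical) return a.physical - b.physical;
  return a.logical - b.logical;
}
```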
Step 2: Sync Handshake

When a peer connects or wants to catch up, it broadcasts a sync request containing:
  • The HLC timestamp of the last operation it processed (globalTimestamp)
  • null, if it is a brand-new peer with no prior history
// Sync request message
{
  type: 'sync',
  globalTimestamp: { physical: 1709582000000, logical: 12 }
}
Step 3: Delta Calculation & Hydration

Upon receiving a sync request, the peer:
  1. Filters its Oplog for all operations with timestamp > globalTimestamp
  2. Hydrates any upsert operations by fetching the full current value from the graph
  3. Creates a self-contained array of complete operations
// Hydrated delta operations
const delta = oplog
  // HLC timestamps are { physical, logical } objects, so they need a
  // comparator rather than a plain `>` (compareHLC is an assumed helper
  // that orders by physical time, then by the logical counter)
  .filter(op => compareHLC(op.timestamp, globalTimestamp) > 0)
  .map(op => {
    if (op.type === 'upsert') {
      return {
        ...op,
        value: graph.get(op.id)?.value // Full current value (node may since have been removed)
      };
    }
    return op;
  });
Step 4: Minimal & Compressed Transfer

The delta array is:
  1. Serialized using MessagePack (binary format)
  2. Compressed with pako (deflate)
  3. Sent to the requesting peer in a deltaSync message
This minimal binary payload dramatically reduces bandwidth.
Step 5: Efficient Application

The receiving peer:
  1. Decompresses the payload
  2. Deserializes from MessagePack
  3. Applies the batch of operations via conflict resolution
  4. Updates its globalTimestamp to the highest timestamp received
The graph state is now up to date with minimal processing overhead.
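The receiving side's bookkeeping can be sketched as follows. This is an illustration, not GenosDB's implementation: `applyOperation` stands in for the engine's conflict-resolution step, and the timestamp comparison is inlined.

```javascript
// Sketch: apply a hydrated delta batch and advance globalTimestamp
// to the highest HLC timestamp seen in the batch.
function applyDelta(state, operations, applyOperation) {
  for (const op of operations) {
    applyOperation(op); // conflict resolution happens inside the engine
    const t = op.timestamp;
    const g = state.globalTimestamp;
    if (!g || t.physical > g.physical ||
        (t.physical === g.physical && t.logical > g.logical)) {
      state.globalTimestamp = t;
    }
  }
  return state;
}
```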

Performance Benefits

Bandwidth Reduction

Delta sync can reduce network traffic by 90-99% compared to full-state sync for active peers with frequent small changes.
Example comparison:
Scenario          Full State   Delta Sync   Savings
10 new messages   500 KB       5 KB         99%
100 updates       500 KB       50 KB        90%
1 node change     500 KB       0.5 KB       99.9%

Guaranteed Consistency Fallback

A delta update is only possible if a peer’s history overlaps with the Oplog of its peers. GenosDB’s engine gracefully handles scenarios where this is not the case by automatically triggering a Full-State Fallback.

Fallback Triggers

Full-state sync is initiated under two specific conditions:
1. Stale timestamp: A peer receives a sync request with a globalTimestamp that is older than the oldest operation in its Oplog.
// Oplog only keeps the last N operations (1,000 by default)
const oldestOplogTimestamp = oplog[0].timestamp;

// HLC timestamps are objects, so compare them with a helper
// (compareHLC is an assumed comparator, not a built-in)
if (compareHLC(requestTimestamp, oldestOplogTimestamp) < 0) {
  // Peer is too far behind, send full state
  triggerFullStateSync();
}
This happens when a peer has been offline longer than the oplog window.
2. New peer: A peer receives a sync request in which globalTimestamp is null.
if (requestTimestamp === null) {
  // Brand new peer, send full state
  triggerFullStateSync();
}
This ensures new peers receive the complete current state immediately.
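Both triggers can be folded into a single decision function. The sketch below assumes the Oplog is sorted oldest-first; the real engine's internals may differ.

```javascript
// Sketch: decide between a delta sync and the full-state fallback.
// `requestTimestamp` is the requester's globalTimestamp (may be null).
function needsFullState(oplog, requestTimestamp) {
  if (requestTimestamp === null) return true; // brand-new peer
  if (oplog.length === 0) return true;        // nothing to diff against
  const oldest = oplog[0].timestamp;
  // Requester predates the oplog window: histories no longer overlap.
  if (requestTimestamp.physical < oldest.physical) return true;
  if (requestTimestamp.physical === oldest.physical &&
      requestTimestamp.logical < oldest.logical) return true;
  return false;
}
```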

The Fallback Process

Step 1: Full-State Transmission

Instead of a delta, the up-to-date peer:
  1. Serializes its entire current graph state (all nodes and relationships)
  2. Compresses with pako
  3. Sends in a syncReceive message
const fullState = {
  graph: serialize(this.graph), // All nodes
  timestamp: this.hybridClock.now() // Current clock
};

const compressed = pako.deflate(msgpack.encode(fullState));
syncChannel.send({ type: 'syncReceive', data: compressed });
Step 2: State Reconciliation & Reset

The desynchronized peer receives the full graph and performs critical reconciliation:
  1. Discards its outdated local graph state
  2. Replaces with the new graph
  3. Clears its own Oplog (previous history is invalid)
  4. Scans the new graph to find the highest HLC timestamp
  5. Fast-forwards its HybridClock to this timestamp
  6. Sets globalTimestamp to this value
// Reconciliation logic
this.graph = deserialize(receivedGraph);
this.oplog.clear();

const maxTimestamp = findMaxTimestamp(this.graph);
this.hybridClock.update(maxTimestamp);
this.globalTimestamp = maxTimestamp;
This ensures the peer is correctly positioned in time and can immediately participate in future delta syncs.
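The `findMaxTimestamp` scan used above might look like this (a sketch, assuming the graph is a Map of node IDs to nodes and each node carries its HLC timestamp):

```javascript
// Sketch: scan a graph (Map of id -> node) for the highest HLC timestamp,
// ordering by physical time first and logical counter second.
function findMaxTimestamp(graph) {
  let max = null;
  for (const node of graph.values()) {
    const t = node.timestamp;
    if (!max || t.physical > max.physical ||
        (t.physical === max.physical && t.logical > max.logical)) {
      max = t;
    }
  }
  return max;
}
```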
Full-state sync is more expensive but guarantees absolute consistency. The hybrid approach ensures it only happens when necessary.

Operation Log (Oplog) Architecture

The Oplog is a critical component that makes delta sync possible.

Configuration

const db = await gdb('mydb', {
  rtc: true,
  oplogSize: 1000  // Keep last 1000 operations (default)
});

Characteristics

  • Capped Size: Circular buffer, keeps most recent N operations
  • Persistent: Stored in localStorage to survive page refreshes
  • Sliding Window: Automatically evicts oldest operations when full
  • Timestamp Indexed: Fast filtering by HLC timestamp
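These characteristics can be illustrated with a minimal capped log. This is a sketch, not GenosDB's implementation; persistence is shown through a pluggable `storage` object exposing the same getItem/setItem interface as `localStorage`.

```javascript
// Sketch: a capped, persistent operation log with a sliding window.
class Oplog {
  constructor(maxSize, storage, key = 'gdb-oplog') {
    this.maxSize = maxSize;
    this.storage = storage;
    this.key = key;
    // Reload any entries that survived a page refresh.
    this.entries = JSON.parse(storage.getItem(key) || '[]');
  }
  push(entry) {
    this.entries.push(entry);
    // Sliding window: evict the oldest entries once the cap is exceeded.
    if (this.entries.length > this.maxSize) {
      this.entries = this.entries.slice(-this.maxSize);
    }
    this.storage.setItem(this.key, JSON.stringify(this.entries));
  }
}
```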

Structure

// Oplog entry format
{
  type: 'upsert' | 'remove' | 'link',
  id: string,              // Node ID
  timestamp: {
    physical: number,      // Wall clock time
    logical: number        // Logical counter
  },
  // Additional metadata for specific operations
}

Limitations

The oplog window size determines how long a peer can be offline before requiring full-state sync:
Oplog Size        Avg Updates/Min   Offline Window
100               10                ~10 minutes
1,000 (default)   10                ~100 minutes
10,000            10                ~16 hours
1,000             100               ~10 minutes
Configure oplogSize based on your application’s update frequency and expected offline periods.
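The windows in the table follow directly from dividing the oplog size by the update rate; a quick estimator (a sketch) for your own numbers:

```javascript
// Sketch: estimate how long (in minutes) a peer can stay offline
// before its history falls outside the oplog window.
function offlineWindowMinutes(oplogSize, updatesPerMinute) {
  return oplogSize / updatesPerMinute;
}
```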

Compression and Serialization

Both delta and full-state sync use the same compression pipeline: data is serialized with MessagePack, then compressed with pako (deflate).

Compression Ratios

Typical compression ratios:
  • Text-heavy data: 70-85% reduction
  • JSON with many keys: 60-75% reduction
  • Binary data: 10-30% reduction
  • Already compressed: Minimal reduction
// Example compression
const data = { /* 100KB of graph data */ };
const encoded = msgpack.encode(data);      // ~85KB (binary)
const compressed = pako.deflate(encoded);  // ~20KB (compressed)

// 80% reduction from original

Security Integration

When the Security Manager is enabled, synchronization containers are signed:
// Signed container for delta sync
{
  type: 'deltaSync',
  operations: [ /* delta operations */ ],
  signature: '0x...',           // Cryptographic signature
  originEthAddress: '0x...'     // Sender's address
}
Receiving peers:
  1. Verify signature before processing
  2. Check permissions for each operation
  3. Filter out unauthorized operations
  4. Apply only verified changes
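The verify-then-filter flow can be sketched as follows. The `verifySignature` and `isAuthorized` callbacks are stand-ins for the Security Manager's real cryptographic and RBAC checks.

```javascript
// Sketch: reject the whole container on a bad signature, then drop any
// individual operations the sender is not authorized to perform.
function processSignedDelta(container, verifySignature, isAuthorized) {
  if (!verifySignature(container)) return []; // untrusted container: apply nothing
  return container.operations.filter(op =>
    isAuthorized(container.originEthAddress, op)
  );
}
```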

Zero-Trust Sync

Even in full-state sync, every node’s lastModifiedBy is verified against RBAC rules, ensuring no invalid data propagates.

Synchronization Logs

In development, GenosDB provides clear sync logs:
🚀 [DELTA SYNC SENDING] Found 47 new operations to send.
[DELTA SYNC RECEIVED] Applied 47 operations from peer-abc.
💥 [FALLBACK TRIGGERED] Peer is too far behind. Sending FULL state.
🔄 [FULL STATE RECEIVED] Replaced local graph with 1,234 nodes.
These help you understand sync behavior during debugging.

Best Practices

Tune the Oplog
  • Higher oplogSize = longer offline tolerance but more memory
  • Lower oplogSize = less memory but more full-state syncs
  • Default (1000) works well for most applications

Monitor Sync Behavior
  • Frequent full-state syncs indicate peers going offline too long
  • Consider increasing oplogSize or implementing wake-on-network
  • Use the cellular mesh for large peer counts to reduce sync load
db.room.on('peer:join', (peerId) => {
  console.log('New peer joined:', peerId);
  // Expect full-state sync
});

Keep Nodes Small
  • Smaller nodes sync faster
  • Avoid storing large binary blobs in node values
  • Use links to reference external data

Performance Metrics

Based on real-world testing:
Metric                  Delta Sync   Full State
Latency (100 ops)       50-100 ms    200-500 ms
Latency (1K ops)        100-200 ms   500-1500 ms
Bandwidth (100 nodes)   5-10 KB      50-200 KB
CPU usage               Very Low     Moderate
UI blocking             None         None

Related

  • Hybrid Logical Clock: How timestamps enable delta sync
  • Worker Architecture: How synced data gets persisted
  • GenosRTC Architecture: Network layer that transports sync messages
  • Distributed Trust: How sync verifies security in a P2P network