v1.0 Now Available

The "SQLite" of
Event Sourcing

Single binary. High-performance gRPC ingest. Zero-config "Time Travel" queries. Reflog brings the power of the Data Lakehouse to your application backend without the complexity.

bash — reflog-cli
reflog ingest --entity user_123 --op UPDATE --payload '{"status": "active"}'
✓ Event committed to segment 4 (0.4ms)
duckdb -c "SELECT * FROM reflog.users AS OF TIMESTAMP '2024-01-01'"
┌──────────┬──────────┬───────────┐
│ id       │ status   │ plan      │
├──────────┼──────────┼───────────┤
│ user_123 │ pending  │ free_tier │
└──────────┴──────────┴───────────┘

Why Reflog?

Architecture for Ops. Analytics for Free.

Single Binary Deployment

Forget generic "Big Data" clusters. Reflog is a single Rust binary. No JVM, no ZooKeeper, no headaches. Just run ./reflog-server and start ingesting.

Native Time Travel

Reflog keeps a perfect audit trail. Query your entity state as it existed 5 minutes ago, or 5 years ago.
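Conceptually, an "as of" query folds an entity's event history up to the requested timestamp. Here is a minimal Python sketch of that idea, using toy data and a hypothetical `state_as_of` helper (not Reflog's API):

```python
from datetime import datetime, timezone

# Toy event log: each event carries a timestamp and a partial-state patch.
# (Illustrative only -- not Reflog's internal event format.)
events = [
    {"ts": datetime(2023, 12, 1, tzinfo=timezone.utc),
     "patch": {"status": "pending", "plan": "free_tier"}},
    {"ts": datetime(2024, 3, 1, tzinfo=timezone.utc),
     "patch": {"status": "active"}},
]

def state_as_of(events, ts):
    """Fold every patch with event time <= ts into a single state dict."""
    state = {}
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["ts"] <= ts:
            state.update(e["patch"])
    return state

# As of 2024-01-01 the user is still "pending"; later events are ignored.
print(state_as_of(events, datetime(2024, 1, 1, tzinfo=timezone.utc)))
```

The same fold, run at a later timestamp, picks up the later patch and returns the "active" state.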

Open Storage Standards

Data is materialized into standard Parquet files. Query it instantly with DuckDB, Polars, or Amazon Athena.

Cloud-Native Resilience

Designed for Active/Passive failover using shared block storage (EBS/Azure Disk). Includes strict file locking and automatic crash recovery.
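The active/passive handoff rests on exclusive file locking over the shared volume: whichever process holds the lock is active, and any other instance stays passive. A minimal sketch of that pattern with POSIX `flock` in Python (illustrative only; Reflog's actual locking protocol is internal):

```python
import fcntl
import os
import tempfile

# The process that acquires an exclusive, non-blocking lock on the shared
# lock file becomes "active"; a second process would fail the acquire and
# remain "passive". (Lock path is illustrative.)
lock_path = os.path.join(tempfile.gettempdir(), "instance.lock")
fh = open(lock_path, "w")
try:
    fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
    role = "active"
except BlockingIOError:
    role = "passive"
print(role)
```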

Drop-in Integration

Reflog speaks gRPC. Define your entity schema in Protobuf, point your application at the Reflog node, and stop worrying about state consistency.

  • Rust-based for stable, low-latency writes
  • Optional Redis Stream integration for buffering
  • Automatic "Delta + Base" compaction
# reflog-config.toml
[server]
listen_addr = "0.0.0.0:50051"
storage_path = "/data"

[compaction]
strategy = "delta_plus_base"
interval_seconds = 300

[failover]
mode = "active_passive"
lock_file = "_meta/instance.lock"

Beyond the Event Store.

Reflog isn't just about moving bytes. It's about unlocking the value trapped in your application's history.

One-Click Customer Audits

Stop digging through logs to answer "Why did this user's balance change?" Generate a human-readable timeline of every state change for any entity, directly from your Parquet projections.

# Query state for support ticket #882
reflog audit --entity user_123 --format pdf

ML Training/Serving Parity

Train models on the exact same data structures they will see in production. Use the Python client to stream historical Parquet segments directly into Polars or DataFrames for training.

import polars as pl

# Glob historical segments straight into a DataFrame for training
df = pl.read_parquet("data/segments/*.parquet")

High-Fidelity Shadow Deploys

Test major architectural changes by replaying millions of records against a "shadow" service. Compare the resulting Parquet projections with your production baseline to catch regressions before they ship.

// Validate new projection logic
✓ 125,000,000 events replayed
✓ 0.00% diff vs production

Time-Travel Debugging

When a bug is reported, don't just look at the current state. Use the Reflog UI to see the exact state of the world when the error occurred. Replay the specific event sequence to reproduce the bug locally.

reflog fork --timestamp "2024-05-12T14:30Z"