v1.0 Now Available

Capture Events Once.
Query Anywhere.

gRPC ingest, append-only durability, and automatic Parquet projections for full history (_events) and latest state (_current).

bash — reflog-cli
reflog ingest --entity user_123 --op UPDATE --payload '{"status": "active"}'
✓ Event committed to segment 4 (0.4ms)
duckdb -c "SELECT * FROM reflog.users AS OF TIMESTAMP '2024-01-01'"
┌──────────┬──────────┬───────────┐
│ id       │ status   │ plan      │
├──────────┼──────────┼───────────┤
│ user_123 │ pending  │ free_tier │
└──────────┴──────────┴───────────┘

Why Reflog?

Architecture for Ops. Analytics for Free.

Single Binary Deployment

Forget generic "Big Data" clusters. Reflog is a single Rust binary. No JVM, no ZooKeeper, no headaches. Just run ./reflog-server and start ingesting.

Dual Projections

Get two query shapes from one ingest path: _events for immutable history and _current for latest entity state.

Open Storage Standards

Reflog writes standard Parquet outputs so your data stays portable across DuckDB, Spark, Polars, DataFusion, and warehouse/lakehouse tooling.

Operational Safety

Segmented processing, checkpoints, and crash-recovery patterns keep ingestion simple while projections remain reliable and deterministic.
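One way to picture the checkpoint pattern (a pure-Python sketch, not Reflog's actual implementation; all names illustrative): the projector records the last fully processed segment, so a restart resumes where it left off instead of reprocessing everything.

```python
# Sealed append-only segments plus a projector checkpoint.
# In reality the checkpoint is persisted so it survives crashes;
# here it is just a dict.
segments = [["e1", "e2"], ["e3"], ["e4", "e5"]]
checkpoint = {"next_segment": 0}
projected = []

def run_projection():
    """Process only segments not yet covered by the checkpoint."""
    for i in range(checkpoint["next_segment"], len(segments)):
        projected.extend(segments[i])      # deterministic projection step
        checkpoint["next_segment"] = i + 1  # advance after each segment

run_projection()                  # first pass processes everything
before_restart = list(projected)
run_projection()                  # "restart": checkpoint prevents double work
print(projected == before_restart, checkpoint["next_segment"])  # True 3
```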

Architecture At A Glance

Reflog keeps the write path lightweight while producing analytics-ready outputs. Events flow through append-only segments and are projected in the background to query-friendly Parquet tables.

  • gRPC ingest accepts create/update/delete entity events
  • Events are durably written to append-only segments
  • Background projection builds _events and _current
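The three steps above can be sketched in miniature (pure Python, names illustrative; the real system persists segments durably and projects to Parquet):

```python
# Minimal in-memory sketch of the ingest -> projection flow.
segments = []      # append-only log: one list per segment
SEGMENT_SIZE = 2   # tiny segments so the example rolls over

def ingest(entity_id, op, payload):
    """Append an event; start a new segment when the current one fills."""
    if not segments or len(segments[-1]) >= SEGMENT_SIZE:
        segments.append([])
    segments[-1].append({"entity_id": entity_id, "op": op, "payload": payload})

def project():
    """Derive both query shapes from the same log."""
    events = [e for seg in segments for e in seg]  # _events: full history
    current = {}                                   # _current: latest state
    for e in events:
        if e["op"] == "DELETE":
            current.pop(e["entity_id"], None)
        else:
            current[e["entity_id"]] = e["payload"]
    return events, current

ingest("user_123", "CREATE", {"status": "pending"})
ingest("user_123", "UPDATE", {"status": "active"})
events, current = project()
print(len(events), current["user_123"]["status"])  # 2 active
```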
-- query examples
SELECT * FROM reflog._events
WHERE entity_id = 'user_123'
ORDER BY event_time DESC;

SELECT * FROM reflog._current
WHERE entity_id = 'user_123';

Beyond the Event Store.

Reflog isn't just about moving bytes. It's about unlocking the value trapped in your application's history.

Browser-Based Entity Explorer

Use the Web UI to browse entity types, inspect records, and see full change history without building custom dashboards first.

# Web UI workflow
Entities → users → user_123
History → ordered event timeline

ML Training/Serving Parity

Build feature pipelines from the same Parquet projections used in production analytics. Combine event history and latest state for reproducible training sets.

import polars as pl

# Read both projections (paths illustrative)
events = pl.read_parquet("/rl/_events/*.parquet")
current = pl.read_parquet("/rl/_current/*.parquet")

# Join history onto latest state (column names illustrative)
train = events.join(current, on="entity_id", suffix="_now")

Deterministic Replay & Backfill

Rebuild projections from append-only history when logic evolves. Deterministic processing makes validation and backfills safer as schemas or downstream models change.

# Reprocess closed segments
✓ checkpoint restored
✓ projections rebuilt
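Conceptually, a rebuild is a pure fold over history: the same events in yield the same projection out. A sketch (event shape and field names are illustrative):

```python
history = [
    {"entity_id": "user_123", "op": "CREATE", "payload": {"plan": "free_tier"}},
    {"entity_id": "user_123", "op": "UPDATE", "payload": {"plan": "pro"}},
]

def rebuild(events, project=lambda payload: payload):
    """Fold events into latest state; `project` lets evolved logic re-derive fields."""
    state = {}
    for e in events:
        if e["op"] == "DELETE":
            state.pop(e["entity_id"], None)
        else:
            state[e["entity_id"]] = project(e["payload"])
    return state

# Deterministic: two replays of the same history agree exactly.
assert rebuild(history) == rebuild(history)

# Backfill: new projection logic re-derives state from unchanged history.
v2 = rebuild(history, project=lambda p: {**p, "is_paid": p["plan"] != "free_tier"})
print(v2["user_123"])  # {'plan': 'pro', 'is_paid': True}
```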

Time-Travel Debugging

Investigate incidents with both views: trace exact event history from _events and confirm current truth in _current. Query with your existing SQL tools.

duckdb -c "SELECT * FROM reflog._events WHERE entity_id='user_123'"
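The same idea extends to point-in-time questions: state "as of T" is just each entity's latest event at or before T. A stand-in sketch using SQLite (schema and values illustrative, echoing the landing example):

```python
import sqlite3

# Stand-in for the _events projection: latest event at or before a cutoff.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (entity_id TEXT, event_time TEXT, status TEXT)")
db.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("user_123", "2023-12-01", "pending"),
    ("user_123", "2024-02-01", "active"),
])

as_of = "2024-01-01"
row = db.execute("""
    SELECT status FROM events
    WHERE entity_id = 'user_123' AND event_time <= ?
    ORDER BY event_time DESC LIMIT 1
""", (as_of,)).fetchone()
print(row[0])  # pending
```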