Single binary. High-performance gRPC ingest. Zero-config "Time Travel" queries. Reflog brings the power of the Data Lakehouse to your application backend without the complexity.
┌──────────┬──────────┬───────────┐
│ id       │ status   │ plan      │
├──────────┼──────────┼───────────┤
│ user_123 │ pending  │ free_tier │
└──────────┴──────────┴───────────┘
Architecture for Ops. Analytics for Free.
Forget generic "Big Data" clusters. Reflog is a single Rust binary. No JVM, no ZooKeeper, no headaches. Just run ./reflog-server and start ingesting.
Reflog keeps a complete audit trail. Query any entity's state exactly as it existed 5 minutes ago, or 5 years ago.
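The "as of" semantics can be sketched in a few lines of Python. The history list, timestamps, and field names below are invented for illustration; Reflog stores events in Parquet, not Python lists, but the lookup logic is the same idea:

```python
from bisect import bisect_right
from datetime import datetime, timezone

# Hypothetical event history for one entity: (timestamp, state) pairs,
# sorted by time.
history = [
    (datetime(2020, 1, 1, tzinfo=timezone.utc), {"status": "pending"}),
    (datetime(2023, 6, 1, tzinfo=timezone.utc), {"status": "active"}),
    (datetime(2024, 2, 1, tzinfo=timezone.utc), {"status": "churned"}),
]

def state_as_of(history, when):
    """Return the latest state at or before `when`,
    or None if the entity did not exist yet."""
    idx = bisect_right([ts for ts, _ in history], when)
    return history[idx - 1][1] if idx else None

print(state_as_of(history, datetime(2023, 7, 1, tzinfo=timezone.utc)))
# → {'status': 'active'}
```

A time-travel query is just "the newest event at or before the requested timestamp" — the same rule applies whether the window is minutes or years.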
Data is materialized into standard Parquet files. Query it instantly with DuckDB, Polars, or Amazon Athena.
Designed for Active/Passive failover using shared block storage (EBS/Azure Disk). Includes strict file locking and automatic crash recovery.
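The single-writer guarantee behind Active/Passive failover can be sketched with OS-level advisory locks. The lock-file name below is hypothetical and Reflog's actual locking protocol is internal; this only demonstrates why a passive node cannot write while the active node holds the lock:

```python
import fcntl
import os
import tempfile

# Hypothetical lock file on the shared volume.
lock_path = os.path.join(tempfile.gettempdir(), "reflog.lock")

# The active node acquires an exclusive lock on startup.
active = open(lock_path, "w")
fcntl.flock(active, fcntl.LOCK_EX | fcntl.LOCK_NB)

# A second node (here: a second open of the same file) must not
# be able to acquire the lock while the active node holds it.
passive = open(lock_path, "w")
try:
    fcntl.flock(passive, fcntl.LOCK_EX | fcntl.LOCK_NB)
    passive_holds_lock = True
except BlockingIOError:
    passive_holds_lock = False  # waits for failover instead

print(passive_holds_lock)  # False: only one writer at a time
```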
Reflog speaks gRPC. Define your entity schema in Protobuf, point your application at the Reflog node, and stop worrying about state consistency.
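A hypothetical entity schema, matching the example table above (the message and field names are illustrative; Reflog only requires that the entity be described in Protobuf):

```protobuf
syntax = "proto3";

// Illustrative entity definition -- your application's
// schema is whatever your domain needs.
message User {
  string id     = 1;
  string status = 2;
  string plan   = 3;
}
```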
Reflog isn't just about moving bytes. It's about unlocking the value trapped in your application's history.
Stop digging through logs to answer "Why did this user's balance change?" Generate human-readable timelines of every state change for any entity, directly from your Parquet projections.
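A toy version of that timeline generation, diffing consecutive state snapshots (the helper and field names are hypothetical; in practice the snapshots would come from a Parquet projection):

```python
def timeline(events):
    """events: list of (timestamp_str, state_dict) in time order.
    Returns one human-readable line per changed field."""
    lines, prev = [], {}
    for ts, state in events:
        for field in sorted(set(prev) | set(state)):
            before, after = prev.get(field), state.get(field)
            if before != after:
                lines.append(f"{ts}  {field}: {before!r} -> {after!r}")
        prev = state
    return lines

events = [
    ("2024-01-01T09:00Z", {"status": "pending", "plan": "free_tier"}),
    ("2024-01-02T12:30Z", {"status": "active", "plan": "pro"}),
]
for line in timeline(events):
    print(line)
```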
Train models on the exact same data structures they will see in production. Use the Python client to stream historical Parquet segments directly into Polars or Pandas DataFrames for training.
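The train/serve-consistency idea in miniature: one feature function applied both to historical states (to build training rows) and to live states (at inference time). Plain dicts stand in for Parquet rows here, and the feature function and fields are invented:

```python
def to_features(state):
    # Hypothetical featurization; the same function runs in
    # training pipelines and in the production service.
    return [
        1.0 if state["plan"] == "pro" else 0.0,
        1.0 if state["status"] == "active" else 0.0,
    ]

# Rows as they would arrive from historical Parquet segments.
historical = [
    {"plan": "free_tier", "status": "pending", "churned": 1},
    {"plan": "pro", "status": "active", "churned": 0},
]

X = [to_features(row) for row in historical]
y = [row["churned"] for row in historical]
print(X, y)  # [[0.0, 0.0], [1.0, 1.0]] [1, 0]
```

Because training reads the same materialized state the service sees, there is no separate "offline" schema to drift out of sync.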
Test major architectural changes by replaying millions of records against a "shadow" service. Compare the resulting Parquet projections with your production baseline to catch regressions before they ship.
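A minimal sketch of that shadow comparison: replay one event stream through two reducers and diff the resulting projections. Both reducers are hypothetical stand-ins for the production and candidate service versions, and the candidate contains a deliberate regression so the diff has something to catch:

```python
def reduce_v1(state, event):
    """Production behavior: merge each event into the state."""
    state = dict(state)
    state.update(event)
    return state

def reduce_v2(state, event):
    """Candidate with a (deliberate) regression: drops 'plan'."""
    state = dict(state)
    state.update(event)
    state.pop("plan", None)
    return state

events = [{"status": "pending", "plan": "free_tier"}, {"status": "active"}]

def replay(reducer):
    state = {}
    for event in events:
        state = reducer(state, event)
    return state

baseline, shadow = replay(reduce_v1), replay(reduce_v2)
diff = {k for k in baseline.keys() | shadow.keys()
        if baseline.get(k) != shadow.get(k)}
print(diff)  # {'plan'} -- caught before shipping
```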
When a bug is reported, don't just look at the current state. Use the Reflog UI to see the exact state of the world when the error occurred. Replay the specific event sequence to reproduce the bug locally.