I build data platforms (Snowflake, dbt, Airflow) and kept seeing the same issue: standing up a clean analytics stack is harder than it should be. Not because of the tools, but because of the patterns.
How do you structure raw vs staging vs analytics layers? How do you ingest without creating a mess? How do you avoid rebuilding the same scaffolding every time?
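By layering I mean the convention popularized by dbt: raw data lands untouched, staging models do renaming and type-casting only, and analytics models hold the business logic. A minimal dbt-style sketch (the table and column names here are hypothetical, not from ClawData):

    -- models/staging/stg_orders.sql
    -- Staging: rename, cast, light cleanup; one model per raw table, no joins.
    select
        id                             as order_id,
        customer_id,
        cast(amount as decimal(12, 2)) as amount,
        cast(created_at as timestamp)  as ordered_at
    from {{ source('raw', 'orders') }}

    -- models/analytics/fct_daily_revenue.sql
    -- Analytics: business logic, built only on staging models, never on raw.
    select
        date_trunc('day', ordered_at) as order_date,
        count(*)                      as order_count,
        sum(amount)                   as revenue
    from {{ ref('stg_orders') }}
    group by 1

The payoff of the split is that raw stays a faithful copy of the source, so staging and analytics can be rebuilt from scratch at any time.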
So I pulled the patterns I use into something reusable.
ClawData is a skills library for OpenClaw that encodes practical ingestion and modelling workflows. It’s less about generating SQL and more about enforcing structure.
You can run it locally:
    git clone https://github.com/clawdata/clawdata.git
    cd clawdata
    ./setup.sh
It checks for OpenClaw, installs it if needed, and lets you pick which skills to enable (DuckDB, dbt-style modelling, Snowflake patterns, etc.).
It’s early. I’m still figuring out the right abstractions.
Would appreciate feedback — especially on whether encoding data engineering patterns this way makes sense.