Before diving into the details, I want to mention that I set out with a clear goal: to see how much I could get the AI to do.
I went through several iterations on implementation, documentation design, logging, and testing. It was absolutely important that the AI understood the ground rules: the documentation was the authority on design; the implementation plan had to be followed; everything had to be logged, even the smallest of changes; and under no circumstances were tests to be deleted or modified unless there was a significant architectural change.
So it had to write tests based on the documented designs. I’ve racked up almost 1200 tests.
The process was not perfect, and it was definitely frustrating at times, but I managed to make something useful with AI as a hobby project. Please go easy on me for using AI.
Repo: [https://github.com/JJLDonley/Simple](https://github.com/JJLDonley/Simple)
I built it in C++, and I don't know whether it will work out of the box on Windows or macOS; I haven't been able to test on those platforms yet.
Simple is a type-strict VM. It currently supports:
* A full .simple → SIR → SBC → VM pipeline
* A bytecode loader, verifier, and interpreter runtime (see the dispatch-loop sketch after this list)
* CLI workflows for run, check, build/compile, and emit
* Language features including functions, control flow, arrays/lists, modules/imports, enums/artifacts, and pointers for FFI boundaries
* Dynamic extern calls via Core.DL with typed signatures (including struct/by-value interop; see the FFI sketch below)
* Project-root local import resolution and improved compiler diagnostics with source spans
* Installer/release flow and a growing test suite across core, IR, JIT, and language paths
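To make the loader/verifier/interpreter split concrete, here is a minimal, hypothetical sketch of the kind of dispatch loop that sits at the end of the pipeline. The opcodes, encoding, and stack model are invented for illustration; they are not Simple's actual SBC format.

```cpp
// Minimal, hypothetical interpreter dispatch loop. The opcodes, encoding,
// and stack model are illustrative assumptions, not Simple's real SBC.
#include <cstdint>
#include <cstdio>
#include <stdexcept>
#include <vector>

enum Op : uint8_t { PUSH = 0, ADD = 1, PRINT = 2, HALT = 3 };

void run(const std::vector<uint8_t>& code) {
    std::vector<int64_t> stack;
    size_t pc = 0;
    while (pc < code.size()) {
        switch (code[pc++]) {
            case PUSH:  // next byte is an immediate operand
                stack.push_back(code[pc++]);
                break;
            case ADD: {  // pop two values, push their sum
                int64_t b = stack.back(); stack.pop_back();
                stack.back() += b;
                break;
            }
            case PRINT:
                std::printf("%lld\n", (long long)stack.back());
                break;
            case HALT:
                return;
            default:
                // a real verifier rejects malformed code before execution
                throw std::runtime_error("bad opcode");
        }
    }
}

int main() {
    // "2 + 3" encoded as hypothetical bytecode
    run({PUSH, 2, PUSH, 3, ADD, PRINT, HALT});
}
```

In a type-strict VM, the verifier's job is to reject malformed bytecode (bad opcodes, stack underflow, type mismatches) before this loop ever runs, which is why the `default` case above should be unreachable in practice.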
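Since dynamic extern calls are the most interop-heavy feature on the list, here is a hedged sketch of the general mechanism behind something like Core.DL on POSIX systems: resolve a symbol at runtime, then use a declared, typed signature to make the untyped pointer callable. This is the textbook `dlopen`/`dlsym` approach, not Simple's actual implementation; the function and library names are illustrative.

```cpp
// Hedged sketch of a dynamic extern call on POSIX. This illustrates the
// general dlopen/dlsym mechanism, not Simple's actual Core.DL internals;
// the helper name and signature scheme here are assumptions.
#include <dlfcn.h>   // POSIX-only; one reason Windows support is untested
#include <cstdio>
#include <stdexcept>
#include <string>

// A typed signature lets the VM check arguments before the raw call.
using UnaryDoubleFn = double (*)(double);

double call_extern_unary_double(const std::string& lib,
                                const std::string& symbol,
                                double arg) {
    void* handle = dlopen(lib.c_str(), RTLD_NOW);
    if (!handle) throw std::runtime_error(dlerror());

    // dlsym returns an untyped pointer; the declared signature is what
    // makes the cast (and the call) safe in a type-strict VM.
    void* sym = dlsym(handle, symbol.c_str());
    if (!sym) {
        dlclose(handle);
        throw std::runtime_error("symbol not found: " + symbol);
    }
    auto fn = reinterpret_cast<UnaryDoubleFn>(sym);
    double result = fn(arg);
    dlclose(handle);
    return result;
}

int main() {
    // e.g. cos(0.0) from the C math library (SONAME varies by platform)
    std::printf("%f\n", call_extern_unary_double("libm.so.6", "cos", 0.0));
}
```

By-value struct interop is the hard part of this design, since passing a struct by value depends on the platform ABI rather than on anything the VM controls, which is presumably why the typed signatures matter.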
I’m looking for feedback on usability and IR/VM design tradeoffs. Ask me anything about the architecture, constraints, or roadmap.