Show HN: ResourceAI – Local LLM inference optimized for consumer iGPUs

  • Posted 5 hours ago by Fenix46
  • 1 point
ResourceAI was created to serve LLMs on consumer hardware, with a focus on portable machines. The basic idea is to optimize inference for integrated GPUs (iGPUs), and the results so far have been very good. The project is open source, so feel free to take a look.

It's built with a Rust backend, a Flutter frontend, and llama.cpp as the inference engine.
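
For context, here is one plausible way a Rust backend can drive llama.cpp. This is only an illustrative sketch under my own assumptions, not ResourceAI's actual code: it assumes llama.cpp's bundled HTTP server (llama-server) is running locally on port 8080, and it uses the reqwest (blocking + json features) and serde_json crates.

    // Hypothetical sketch: a Rust backend asking a local llama.cpp
    // server for a completion over HTTP. Not ResourceAI's real code.
    use serde_json::{json, Value};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let client = reqwest::blocking::Client::new();

        // llama.cpp's built-in server exposes a /completion endpoint that
        // takes a prompt plus a token budget and returns generated text.
        let resp: Value = client
            .post("http://localhost:8080/completion")
            .json(&json!({
                "prompt": "Explain what an iGPU is in one sentence.",
                "n_predict": 64
            }))
            .send()?
            .json()?;

        // The generated text comes back in the "content" field.
        println!("{}", resp["content"].as_str().unwrap_or(""));
        Ok(())
    }

In practice the app might link llama.cpp directly through FFI bindings instead of going through HTTP; the sketch is only meant to show the general shape of such an integration.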

There is currently full support for macOS on Apple silicon, and for Windows via Vulkan, which covers a broader range of hardware.

RAG has been implemented and web search is currently available.

The project is under active development, and new features will be added over time.

Here's the website where you can download it, as well as the GitHub page:

https://resourceai.fenixresource.com
https://github.com/orgs/ResourceAI-app/repositories
