Show HN: Zeptaframe – Open-source click-and-drag precision for AI video gen

  • Posted 11 hours ago by Pablerdo
  • 3 points
https://github.com/Pablerdo/zeptaframe
I built Zeptaframe to solve a major issue with AI video generation: the lack of precision when prompting video models.

Why it matters:

Controllability is one of the main constraints on the adoption of AI video for serious work, so more precise prompting interfaces are worth exploring.

Zeptaframe implements an image canvas with automatic segmentation (using in-browser SAMv2) plus a click-and-drag interface. This lets the user precisely define the motion of any object or person, as well as the camera motion, and feed those trajectories into the video diffusion process.
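To make the click-and-drag idea concrete, here is a minimal sketch of how a user-drawn drag path could be resampled into one object position per output frame. The function name, NumPy usage, and arc-length parameterization are my own illustration, not Zeptaframe's actual code:

```python
import numpy as np

def trajectory_to_frames(drag_points, num_frames):
    """Resample a user-drawn drag path into one (x, y) position per frame.

    drag_points: list of (x, y) canvas coordinates captured during the drag.
    num_frames: number of frames in the generated video.
    (Hypothetical helper for illustration only.)
    """
    pts = np.asarray(drag_points, dtype=float)
    # Cumulative arc length along the path serves as the interpolation axis,
    # so evenly spaced frames correspond to constant speed along the drag.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]  # normalize to [0, 1]
    samples = np.linspace(0.0, 1.0, num_frames)
    xs = np.interp(samples, t, pts[:, 0])
    ys = np.interp(samples, t, pts[:, 1])
    return list(zip(xs, ys))

# An L-shaped drag resampled to 5 frames: start, corner, and end are preserved.
path = trajectory_to_frames([(0, 0), (10, 0), (10, 10)], num_frames=5)
```

A real pipeline would attach one such trajectory to each segmented object mask before conditioning the diffusion model.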

The back-end pipeline implements the Go-with-the-Flow paper (an oral presentation at CVPR 2025), where I was invited to demo this project (I originally built it for a college project).

I still have some funds left over from the conference, so you get 2 free generations with the code CVPR2025. Note that you will NOT receive a confirmation email.

The demos in the GitHub repo show how to use the platform. Here is my favorite one: https://www.youtube.com/watch?v=AHrVIavRmkk

Here is the link to the live site: https://zeptaframe.com/

Excited to see what HN thinks!