Show HN: Try ByteDance's Seedance 2.0 AI video model

  • Posted 3 hours ago by jackson_mile
  • 1 point
https://laike.ai/tools/seedance-2
Hi HN,

I’m building an AI video playground where people can try different text-to-video models in one place. We recently added support for Seedance 2.0, ByteDance’s latest video model, and I thought some of you might like to experiment with it.

https://laike.ai/tools/seedance-2

A few things that stood out when testing Seedance 2.0:

- It often generates multi-shot sequences instead of a single continuous shot
- Motion can be quite smooth compared to earlier text-to-video models
- It supports native audio generation
- Prompts with camera movement or cinematic language tend to work well (e.g., "slow dolly-in, shallow depth of field, golden-hour light" rather than a flat scene description)

The goal of the site isn't just Seedance: it's to let people compare different video models in one consistent interface. I found it frustrating that every model lives in its own UI, with its own credit system and prompt format.

Some honest notes:

- Outputs are still inconsistent
- Prompting quality matters a lot
- Long narrative coherence is hard for all current models
- Running these models is expensive, so there are usage limits

If you’re experimenting with generative video, I’d love to hear:

- What prompts work well for you
- Where current models fail
- What tooling you wish existed around text-to-video

Happy to answer questions about the product or setup.
