So I vibe-coded up an app with Claude. (I am a developer, but I don't know JavaScript at all.)
You can run it locally if you have LLM API keys, or you can use the instance I have running at:
https://reverse-turing-pi.vercel.app/
until my OpenAI/Vercel credits run out.
The LLM will ask you 3 questions, then it will interview an LLM, and then it will try to guess which of the two is the human pretending to be an LLM. The LLM interviewee has been instructed to give brief answers, so you don't need to write a novel. A few sentences should be good enough? I don't know; I've never been able to fool it. And that's with it only running 4o-mini.
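If you're curious roughly how it works under the hood, here's a minimal TypeScript sketch using the OpenAI Node SDK. To be clear, this isn't the app's actual source: the prompts and function names are made up for illustration, and only the 3-question format and the 4o-mini model come from what I described above.

```typescript
// Rough sketch of the flow, not the app's actual source. The prompts and
// function names are illustrative; only the 3-question format and the
// gpt-4o-mini model match the description above.
import OpenAI from "openai";

const openai = new OpenAI(); // expects OPENAI_API_KEY in the environment

type Msg = { role: "system" | "user" | "assistant"; content: string };

async function complete(messages: Msg[]): Promise<string> {
  const res = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages,
  });
  return res.choices[0].message.content ?? "";
}

// Run one interview: the judge asks 3 questions, the subject answers each.
// `answer` is either the human typing or another gpt-4o-mini call that has
// been told to keep its replies brief.
async function interview(answer: (q: string) => Promise<string>): Promise<string> {
  const judge: Msg[] = [
    { role: "system", content: "Interview the subject. Ask exactly 3 short questions, one at a time." },
  ];
  const transcript: string[] = [];
  for (let i = 0; i < 3; i++) {
    const question = await complete(judge);
    const reply = await answer(question);
    judge.push({ role: "assistant", content: question }, { role: "user", content: reply });
    transcript.push(`Q: ${question}\nA: ${reply}`);
  }
  return transcript.join("\n");
}

// After both interviews, one final call asks the judge to pick the human.
async function verdict(subjectA: string, subjectB: string): Promise<string> {
  return complete([
    { role: "system", content: "One of these two subjects is a human pretending to be an LLM. Say which one, and why." },
    { role: "user", content: `Subject A:\n${subjectA}\n\nSubject B:\n${subjectB}` },
  ]);
}
```

That final verdict prompt is essentially what you're trying to fool.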
I'm not storing anything, not even in logs. Please don't do anything that would violate OpenAI's terms, but feel free to try to fool the prompt any way you like, including prompt injections. A win is a win.
You can even just copy and paste your answers from an LLM if you want, but that just kind of proves the point.