Show HN: Gonfire – Assess how well candidates steer AI coding agents

  • Posted 3 hours ago by abr0ahm
  • 1 point
When I graduated from a CS program in 2020, leetcode was basically a SWE entrance exam. Your ability to solve a coding puzzle thrown at you on the spot determined your fate.

Recently, I’ve interviewed for a handful of “AI Engineer” positions at several startups and I noticed a shift in the format of technical assessments. Timed OAs and live leetcoding have been replaced with a “case study” format where AI use is encouraged. These were the two main patterns I saw:

1. Take home: Candidate downloads starter code with README. They complete the assignment according to the instructions using any tools they would like, then submit the code.

2. Live assessment: Same as #1, but the candidate shares their screen on a live call while the interviewer observes how they solve the problem using AI.

Both of these formats still seem broken. Reviewing a submitted take-home means the hiring manager sifts through an entirely AI-generated codebase, which reveals little about the candidate’s problem-solving ability. Live assessments take a full hour of the interviewer’s time (often the CTO’s) per candidate.

Moreover, both formats throw away the most valuable piece of information: the Claude Code session log.

I built Gonfire: a proxy that records and analyzes a candidate’s Claude Code interactions while they solve the assessment, then displays a digestible report to the hiring manager.
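To make the proxy idea concrete, here is a minimal sketch (not Gonfire’s actual code) of a recording proxy: it sits between the agent and the Anthropic API, appending every request and response to a JSONL session log. It assumes the agent can be pointed at the proxy via an environment variable such as ANTHROPIC_BASE_URL; the upstream URL, log path, and port are all illustrative.

```python
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

UPSTREAM = "https://api.anthropic.com"  # where real traffic is forwarded
LOG_PATH = "session.jsonl"              # one JSON event per line


def record_event(path, event):
    """Append one request/response event to a JSONL session log."""
    event.setdefault("ts", time.time())
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")


class RecordingProxy(BaseHTTPRequestHandler):
    """Logs each POST body, forwards it upstream, logs and relays the reply."""

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        record_event(LOG_PATH, {
            "dir": "request",
            "path": self.path,
            "body": body.decode("utf-8", "replace"),
        })

        # Forward to the real API, dropping the Host header so it matches upstream.
        req = Request(
            UPSTREAM + self.path,
            data=body,
            headers={k: v for k, v in self.headers.items() if k.lower() != "host"},
            method="POST",
        )
        with urlopen(req) as resp:
            status = resp.status
            content_type = resp.headers.get("Content-Type", "application/json")
            payload = resp.read()

        record_event(LOG_PATH, {
            "dir": "response",
            "status": status,
            "body": payload.decode("utf-8", "replace"),
        })

        # Relay the upstream reply back to the agent unchanged.
        self.send_response(status)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)


# To run: HTTPServer(("127.0.0.1", 8080), RecordingProxy).serve_forever()
# then point the agent at it, e.g. ANTHROPIC_BASE_URL=http://127.0.0.1:8080
```

The resulting JSONL file is the session transcript a hiring manager’s report could be built from; streaming responses and non-POST routes would need extra handling in a real deployment.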

I took an assessment myself; you can view my results in the demo below.

Live demo: https://app.gonfire.io (showhn@gonfire.io / Aa123123123123)

Relevant post from Anthropic: <https://www.anthropic.com/engineering/AI-resistant-technical...>
