
Comments (114)

  • gcp123
    I've spent the last decade watching this arms race between interviewers and candidates. Last month I hired a senior dev who couldn't implement a basic database migration when we brought him on, but aced our interview problems. Turned out he'd been using tools like this.

    The problem isn't the tools - they're inevitable. The problem is that our industry clings to this bizarre ritual where we test for skills that are completely orthogonal to the actual job.

    My current team scrapped the algorithmic questions entirely. We now do pair programming on a small feature in our actual codebase, with full access to Google/docs/AI. The only restriction is we watch how they work. This approach has dramatically improved our hit rate on good hires.

    What I care about is: Can they reason through a problem? Do they ask good clarifying questions? Can they navigate unfamiliar code? Do they know when to use tools vs when to think?

    These "invisible AI" tools aren't destroying technical interviews - they're just exposing how broken they already were.
  • danielvaughn
    My approach to technical interviews is just to talk shop with the candidate for an hour.

    Throughout the conversation, we mostly stay light and touch on a lot of different topics. But every so often, I'll drill in and start discussing some random topic at depth. If you drill in just 2-3 times throughout the interview, you get a pretty clear picture of the candidate's average depth of knowledge.

    Not only is this LLM-proof, but you also get a sense of their opinions, their interests, their passion, etc.
  • sbuttgereit
    So, am I missing something? Is this just a tool allowing job candidates to commit fraud? I'm no lawyer, so I'm not sure this would rise to the level of actual legal fraud, but moral fraud at the very least is what I think I'm seeing.

    EDIT: Now I see it at the bottom of the page... "Interview Coder is a desktop app designed to help job seekers ace technical interviews by providing real-time assistance with coding questions."

    So yes, it's exactly what it looks like.
  • Ozzie_osman
    I'm seeing a lot of justification for this tool (on the tool's page and in the comments here) along the lines of "LeetCode is bad, companies shouldn't test for orthogonal skills."

    While I agree with that sentiment broadly, tools like this undermine the process even for non-orthogonal skills. For instance, we administer System Design interviews and Practical Coding interviews (usually, we give the candidate a code base and ask them to make a modification to it) - things that are not LeetCode and are pretty relevant to day-to-day work. We actually let candidates use AI, as long as they show how they're using it. Tools like this still undermine our process even for those types of interviews.

    I'm a realist and understand that tools like this are inevitable. But I don't think they're ethical, and I don't think the "Fuck Leetcode" argument justifies their existence. In general, trickery is wrong (whether it's companies doing it or candidates).
  • darrenkopp
    I interview people regularly. These are easy to detect - not in a direct way, but I can tell when you are being assisted by AI. For those who were going to ask how frequently: 3 so far this month, out of 12.
  • MOARDONGZPLZ
    This is why we have moved all interview loops to in-person. I highly recommend everyone who is a hiring manager do 100% of their loops in person. Granted, the coding section isn't 100% of the interview, but it's very important.
  • 18172828286177
    Great, I can't wait to go back to onsite interviews, where I have to spend an entire day (at least) getting to some random office and sitting in an uncomfortable chair.
  • exrhizo
    Rather than banning AI in technical interviews, it's better to see how candidates use it: whether they can comprehend what the LLM is saying, the quality of their prompts, and their own thinking.
  • jlcases
    Interesting approach. The effectiveness of any AI, especially in nuanced scenarios like interviews, hinges on how well its underlying knowledge is structured. For an 'invisible AI interviewer' to ask relevant, probing questions, it needs more than just data - it requires a structured understanding of the domain.

    I've found that applying MECE principles (Mutually Exclusive, Collectively Exhaustive) to knowledge domains dramatically improves AI performance in complex tasks. It ensures comprehensive coverage without redundancy, allowing the AI to navigate concepts more effectively. This seems particularly relevant for assessing candidate depth versus breadth.
  • ErikAugust
    The industry seems so divided on AI right now. We have interviews where we aren't allowing the use of it (yet interviewees are using stealth AIs to cheat). At the same time, I am also hearing of organizations mandating the use of it, e.g. "20% of the code committed needs to be generated." There's probably a set of orgs that do not allow the use of AI in coding interviews, yet practically mandate the use of AI in day-to-day work!

    We are at an inflection point, I think, but my guess is AI is going to win out soon enough.
  • bilsbie
    I think this is a cool idea: a platform where referrers can register and put some money on the line - say $20. When an employer contacts my referrers through the platform, and I end up getting fired in the first six months, the people who referred me lose their money (and reputation). If I stay on, they get paid that amount.

    Feel free to tweak the idea, but I think it would be great to hire based on referrals in a trustable way.
  • CSMastermind
    I simply tell candidates to use AI as part of the interview process now. It functionally changes nothing about the evaluation.
  • poincaredisk
    Great, another paid tool to cheat at coding interviews. I guess the future is coming back to on-site interviews only.
  • gnabgib
    Related:

    The Leader of the LeetCode Rebellion: An Interview with Roy Lee (70 points, 9 days ago, 44 comments) https://news.ycombinator.com/item?id=43497848

    I got kicked out of Columbia for taking a stand against LeetCode interviews (20 points, 9 days ago, 18 comments) https://news.ycombinator.com/item?id=43497652
  • siva7
    This will just mean that technical interviews won't be done remotely or as homework in the future. Even before covid I wouldn't have recommended the remote interview approach. By far the best results in interviewing came from a technical talk-through of the candidate's past experience, or a short pair-programming task (mob programming or refinement) where they can use whatever tool they want if they lack experience to talk about - I wanna see how they tackle real problems by asking good questions. Hard to fake even with advanced AI tools if the interviewer is a very experienced engineer.
  • swat535
    So at our company, we stopped asking algorithm questions in interviews.

    Instead, our process starts with a one-hour technical conversation. We talk through the candidate's experience, how they think about systems and products, and dig into technical topics relevant to our stack (Ruby on Rails). This includes things like API design, ActiveRecord, SQL, caching, and security.

    If that goes well, the next step is a collaborative pull request review. We have a test project with a few PRs, and we walk through one with the candidate. We give them context on the project, then ask for their feedback. We're looking for how they communicate, whether they spot design issues (like overly long parameter lists or complex functions), and how they reason about potential bugs.

    This has worked really well for us. We've built a strong, pragmatic engineering team. Unfortunately, though, none of us now remembers how to invert a binary tree without Googling it...
  • macNchz
    I think if I were hiring remotely right now I’d look to create exercises that could be done “open book” using AI, but that I’d validated against current models as something they don’t do very well on their own. There are still tons of areas where the training data is thinner or very outdated, and there’s plenty of signal in seeing whether someone can work through that on their own and fix the things the LLM does wrong, or if they’ve entirely outsourced their problem solving ability.
  • koliber
    When doing a tech interview, watch the person's eyes. Pay attention to the pacing of their answers.

    If they seem to be reading intently, that's a flag. If their answers are fluffy and vague and then get very specific, that's a flag.

    Tools like this might not show up on shared screens, but people who use them behave unnaturally. It's pretty obvious if you know what to look for. I've been doing dozens of technical interviews per month, and it's pretty clear when the person is Googling answers or using some AI tool.
  • jellyfishbeaver
    Seems like a whole new market is opening up for people looking to game the hiring process. In my short few years being involved in interviewing, I've seen 1) obvious AI use off screen, or a second person feeding answers, 2) Person A showing up for the interview process and Person B showing up after being hired, and 3) candidates hiding their moving lips with a large headset mic while someone else speaks for them.

    Wild.
  • alistairSH
    If the problems are tailored to the role and the job requirements can be completed using AI, isn’t this sort of the correct outcome?If you have job requirements that extend beyond “trivially completable with AI” ask questions that aren’t trivially completable with AI.
  • bilsbie
    I wish hiring was: pull a ticket out of your system and work through it with the interviewee.They could get a sense of what type of work they’d be doing and the competence of the organization and you’d get to see how they perform in the real world.
  • hoerzu
    If you don't want to install a binary, I found a way cheaper option for an interview assistant: https://interview.sh
  • serverlessmania
    Getting Amazon and k8s certifications will mean nothing with tools like this
  • ram4jesus
    This is the funniest thing I have seen all week. Burn the process to the ground. Hahaha.
  • sarchertech
    Can we just stop doing performative technical interviews already? The only other industry that interviews senior people the way we do is the performing arts - and performing is right in the name.

    The engineers I've worked with who've caused the most damage have, by far, been technical tornadoes who did fine in interviews. I've never seen any damage caused by someone who slipped through the cracks without being able to code at all (and I've worked with people like that).

    I think we'd be much better off if we just fired people who outright lie about their ability to code and spent more time digging into previous employment history, talking through projects, and talking to old coworkers.
  • xyst
    The fact that AMZN still relies on outdated methodology, such as leetcode problems, to assess _senior_ candidates shows the company is out of touch.
  • rvz
    Good. This destroys a system that is already broken, and renders Leetcode and the others useless for evaluating candidates in the AI era.

    The most ironic thing is that this qualifies as "hacking a system to your advantage" for Y Combinator. Those who are upset by Interview Coder as 'cheating' are themselves bound by an outdated system waiting to be disrupted, and Interview Coder is the result of that.

    The only way to stop this is to return to onsite interviews, which reduces cheaters by >98%. This spells the definitive end of Leetcode and the rest of the online assessment tools.
  • the_real_cher
    this is great because it's just going to bring back in-office interviews, and hopefully any tests will be in person as well
  • colesantiago
    This is brilliant and necessary. Leetcode needs to die, and this is a means to that end in the age of AI.

    For our business we don't use Leetcode; the future looks something like paid bounties and in-person interviews. tinycorp from George Hotz does the very same thing, using paid bounties to get hired there. The highly talented people will do this for fun, while those who aren't will self-select themselves out.

    (0) https://tinygrad.org/#worktiny
  • raverbashing
    The problem with people who think they're too smart is that they usually lack theory of mind.

    This crap is transparent. A person actually solving the problem won't type that fast, and will stop to think silently about it. And even better, the LLMs usually hit a snag on some of the problems, or they write weird snippets.

    (Though this might bring down the number of interviewers asking for leetcode.)