Comments (21)

  • ZeidJ
    We built something similar to this: a Pokemon-style MMORPG where agents and players collaborate to catch “Clawemon” and battle other agents. We posted it online and surprisingly got a lot of negative feedback from users mentioning they would never spend valuable tokens on playing a game. Our intention was to create an interaction experiment to see how agents interact with each other and with their human companions. We ended up making a pretty fun game in the process, which we're still working on. Bring-your-own-inference as a potential future of gaming does not seem too far off. For anyone interested, here is the HN post: https://news.ycombinator.com/item?id=47849872
  • justindz
    What a great lunch read! I've been weekend-warrioring a terminal-based CRPG for a bit myself. I was recently exploring ways to use agents to help with balance testing, which is a real scale problem for a solo indie dev. So far, all I've created is a fight simulator: essentially, take the current player state (stats, effects, gear, companions, etc.), run the fight, simulated, X number of times using one of the currently implemented GOAP personalities, and report how often it wins, loses, the average end turn, stuff like that. I hadn't really thought about trying to create a harness for agents to play the full game interactively. I'd love to explore this. If you don't mind, here are a few questions:
    1) Correct to assume that I probably need a text-only harness even though my game is text-based already, because I do make use of menu selections made via arrow-key-and-enter interactions?
    2) Do you have prompt recommendations for the type of feedback you have found to be useful? I would guess in your case the objectives of the game are clearer than in an open-world RPG. What dead ends have you run into? Maybe a variety of approaches would be good? One agent tries to fight everything; another focuses on gaining and completing as many quests as possible.
    3) How bad is the token burn doing this? Any optimization strategies you've employed?
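The simulate-N-fights idea above can be sketched roughly like this. A minimal illustration only: the `Fighter` stats, the damage model, and the seeded RNG are all made up here, standing in for the real stats/effects/gear state and GOAP personality the comment describes.

```python
import random
from dataclasses import dataclass

@dataclass
class Fighter:
    hp: int
    attack: int

def run_fight(player, enemy, rng, max_turns=50):
    """Simulate one fight; returns (outcome, end_turn)."""
    p_hp, e_hp = player.hp, enemy.hp
    for turn in range(1, max_turns + 1):
        e_hp -= rng.randint(1, player.attack)   # player acts first
        if e_hp <= 0:
            return "win", turn
        p_hp -= rng.randint(1, enemy.attack)
        if p_hp <= 0:
            return "lose", turn
    return "draw", max_turns

def balance_report(player, enemy, runs=1000, seed=0):
    """Run the same matchup many times and aggregate the outcomes."""
    rng = random.Random(seed)                   # seeded -> reproducible reports
    results = [run_fight(player, enemy, rng) for _ in range(runs)]
    wins = sum(1 for outcome, _ in results if outcome == "win")
    avg_turn = sum(turn for _, turn in results) / runs
    return {"win_rate": wins / runs, "avg_end_turn": avg_turn}
```

Swapping the `rng.randint` calls for calls into a real GOAP policy is where the per-personality variants would plug in.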
  • fishtoaster
    I landed on something similar for my own game, though it's been pretty tricky. I'm building a physics-based 2D game involving slingshotting around planets. The realtime nature of it has meant that it's nearly impossible for the AI to test using a browser MCP. It'll take one screenshot, then another, and in the intervening time the player shot off the map and into deep space. Instead, I gave it both a code-level API to step the physics engine forward and backward, and a browser-based `window.game` API to do the same via a browser MCP console. The former helps it work out physics bugs and the latter helps it test animation and UI issues. It's still not great. I keep occasionally getting "I tested it and it works perfectly!" as I stare at the MCP'd browser with the player stuck clipped halfway into a planet. I think, if anything, I need to lean harder into this approach: building really solid tooling for the AI to inspect every aspect of state. I would kill for a turn-based game like OP's XD
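A step-forward/step-backward harness like the one described can be sketched by snapshotting state on every fixed tick. This is a generic illustration, not the commenter's actual API; the state dict and `step_fn` are placeholders for a real physics engine.

```python
import copy

class SteppablePhysics:
    """Deterministic step/rewind harness: snapshot the state each tick so an
    agent can replay a shot frame by frame instead of racing the realtime loop."""

    def __init__(self, state, step_fn):
        self.state = state          # e.g. {"pos": [x, y], "vel": [vx, vy]}
        self.step_fn = step_fn      # advances state by one fixed tick, in place
        self.history = [copy.deepcopy(state)]

    def step(self, n=1):
        for _ in range(n):
            self.step_fn(self.state)
            self.history.append(copy.deepcopy(self.state))
        return self.state

    def rewind(self, n=1):
        # Drop the last n snapshots (never the initial one) and restore.
        del self.history[max(1, len(self.history) - n):]
        self.state = copy.deepcopy(self.history[-1])
        return self.state
```

Exposing `step`/`rewind` both as library calls and via a `window.game`-style console object would cover the two access paths the comment mentions.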
  • StephenAshmore
    I've been doing something similar on my own weekend game! I've got two games in Rust I'm working on: a simple one in Tauri and a more traditional 2D game. For both, I added a CLI that allows me or an AI to play the game and test it. It hooks into the actual game state, just like here, as another way to "render" the game. I think this is pretty similar to end-to-end testing strategies, but with the current state of AI you can have really interesting testing while you're building something. I enjoy starting a fresh AI with no context on the game and giving it just instructions on how to use the CLI. It's an extra pair of eyes for rubber-ducking.
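The "CLI as another way to render" idea boils down to two small functions over the same state the engine uses. A toy sketch (the grid world and `n/s/e/w` commands are invented for illustration; the comment's games are in Rust, and this is just the shape of the harness):

```python
def render(state):
    """Text 'renderer': print the same game state the engine uses as a grid."""
    grid = [["." for _ in range(state["w"])] for _ in range(state["h"])]
    px, py = state["player"]
    grid[py][px] = "@"
    return "\n".join("".join(row) for row in grid)

MOVES = {"n": (0, -1), "s": (0, 1), "e": (1, 0), "w": (-1, 0)}

def apply_command(state, cmd):
    """One CLI command -> updated state, so an agent (or a human over a pipe)
    can play without touching the graphical frontend."""
    if cmd in MOVES:
        dx, dy = MOVES[cmd]
        x, y = state["player"]
        state["player"] = (min(max(x + dx, 0), state["w"] - 1),
                           min(max(y + dy, 0), state["h"] - 1))
    return state
```

Wrapping `apply_command` + `render` in a stdin/stdout loop gives a fresh agent everything it needs: it reads the rendered text, emits a command, and repeats.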
  • squeegmeister
    I recently added E2E tests to my game too. One of the benefits is that I can have my agent verify its own work by asking it to write a test and look at screenshots. Which means I can say “I’m going to bed, implement this and verify it with E2E tests,” and it gets further along than it used to.
  • jongalloway2
    I've been doing this lately, building a Godot game with Copilot CLI. I'm using Godot MCP Pro, which can automate interactions and screenshots, and I have the whole game script in a markdown doc. I was happily surprised when I asked for a walkthrough and it all just worked; it even found and fixed some regressions while I was sleeping.
  • Jabrov
    I can’t wait until the distant future where strategy games will have actually good and interesting AI that can communicate and reason
  • chrisweekly
    This is awesome. Thanks for sharing! The text-based renderer reminds me of playing Larn on my dad's VT100 when I was a child (early 80s).
  • shnippi
    This is sick, thanks for sharing! We've been working on very similar things for the past two years. We also started with a text-only representation, but sadly we quickly realized that only a small subset of games work well with this. So we went down a rabbit hole and decided to do everything purely based on pixels and OS inputs. We're currently only live for mobile, but happy to give you early access to nunu ai for PC if interested. Would love to see how we compare!
  • zoetaka38
    Built something similar for E2E web testing recently. A few observations from running an agentic test harness in production:
    1. The single biggest jump in test quality came from giving the agent BOTH source-code analysis AND live browser snapshots, not either alone. With code only, the agent hallucinates selectors; with browser only, it misses project conventions. Two MCP servers feeding the same agent (one local file-read, one Playwright in-process) was the architecture that worked.
    2. For the browser snapshot tool, returning the raw DOM ate tens of thousands of tokens per call, and the agent struggled to navigate it. Swapping to accessibility-tree refs (e1, e2, ...) cut token usage by ~10x and made the agent reliably target the right elements.
    3. We avoided Docker-based MCP servers in production (we run on ECS Fargate). The in-process SDK MCP pattern (create_sdk_mcp_server plus the @tool decorator) keeps the browser handle in scope of the tool definition, which let us attach page.on('console') listeners and have the agent read them via a separate tool. That's hard to do across stdio process boundaries.
    For game testing specifically: your text-renderer detail is interesting because it sidesteps the visual-grounding problem (how does the agent verify what it's seeing?). Curious how you'd extend this to a 2D/3D rendered game where the screen state isn't easily textualized.
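The "handle in scope of the tool definition" point is essentially a closure pattern: define the tools inside a function that holds the live page object, so a console listener and a read-console tool can share state without crossing a process boundary. A minimal sketch; `FakePage` is a hypothetical stand-in for a Playwright Page (so this runs without a browser), and real SDK tool registration is omitted.

```python
class FakePage:
    """Hypothetical stand-in for a Playwright Page, for illustration only."""
    def __init__(self, elements):
        self.elements = elements
        self._handlers = {}
    def on(self, event, handler):
        self._handlers[event] = handler
    def emit(self, event, msg):
        self._handlers[event](msg)

def make_browser_tools(page):
    """Tools defined as closures over one shared page handle, the pattern an
    in-process MCP server enables (no stdio boundary between tool calls)."""
    console_log = []
    page.on("console", lambda msg: console_log.append(msg))

    def snapshot():
        # Accessibility-tree style refs (e1, e2, ...) instead of raw DOM dumps.
        return [f"e{i} {el}" for i, el in enumerate(page.elements, start=1)]

    def read_console():
        # Drain the buffer so each call returns only new messages.
        drained, console_log[:] = list(console_log), []
        return drained

    return {"snapshot": snapshot, "read_console": read_console}
```

Both tools see the same `console_log` list because they close over it; in a separate-process MCP server, that shared mutable state would have to be serialized across stdio instead.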
  • haunter
    Is there an AI which can "solve" the Path of Exile 1/2 passive skill tree yet?
  • empath75
    I hooked up an MCP server to a MUD and got some pretty amazing results, including Claude Code agents in separate windows chatting with each other and cooperating on building out a new section.
  • Modified3019
    My earliest desire for real AI was so it could control my dumb fucking harvester in C&C95.