Comments (36)
- chrisweekly: This looks pretty cool, at least at first glance. I think "traditional web testing" means different things to different people. Last year, the Netflix engineering team published "SafeTest"[1], an interesting hybrid / superset of unit and e2e testing. Have you guys (Magnitude devs) considered incorporating any of their ideas?

  1. https://netflixtechblog.com/introducing-safetest-a-novel-app...
- NitpickLawyer:

  > The idea is the planner builds up a general plan which the executor runs. We can save this plan and re-run it with only the executor for quick, cheap, and consistent runs. When something goes wrong, it can kick back out to the planner agent and re-adjust the test.

  I've recently been thinking about testing/QA with VLMs + LLMs. One area I haven't seen explored (but which should 100% be feasible) is to have the first run use the LLM + VLM, then have the LLM(s?) write repeatable "cheap" tests with traditional libraries (Playwright, Puppeteer, etc.). On every run you do the "cheap" traditional checks; if any fail, go with the LLM + VLM again and see what broke, and only fail the test if both fail. Makes sense?
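  A minimal sketch of that two-tier loop, assuming plain Playwright for the cheap check; `runAgentCheck` is a hypothetical stand-in for the LLM + VLM pass, not an actual Magnitude API, and the URL is a placeholder:

  ```typescript
  import { chromium, Page } from "playwright";

  // Hypothetical LLM + VLM re-check; in a real setup this would screenshot
  // the page and ask a vision agent to verify the step.
  async function runAgentCheck(page: Page, step: string): Promise<boolean> {
    return false; // TODO: wire up the agent
  }

  // Cheap, deterministic check the LLM could have written after its first run.
  async function checkTodoAdded(page: Page, text: string): Promise<boolean> {
    try {
      await page
        .getByRole("listitem")
        .filter({ hasText: text })
        .waitFor({ timeout: 2000 });
      return true;
    } catch {
      return false;
    }
  }

  (async () => {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto("https://example.com/todos"); // placeholder URL

    if (!(await checkTodoAdded(page, "buy milk"))) {
      // Cheap check failed: selector drift or a real regression. Re-verify
      // with the agent; only fail the test if both paths fail.
      const agentOk = await runAgentCheck(page, 'todo "buy milk" is visible');
      if (!agentOk) throw new Error("failed under both cheap and agent checks");
      // Agent passed: behavior is intact, so regenerate the cheap check here.
    }
    await browser.close();
  })();
  ```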
- SparkyMcUnicorn: This is pretty much exactly what I was going to build. It's missing a few things, so I'll either be contributing to or forking this in the future.

  I'll need a way to extract data as part of the tests, like screenshots and page content. That would allow supplementing the tests with non-Magnitude features, as well as adding checks that are a bit more deterministic: assert that the added todo item exactly matches the input data, diff screenshots when the planner fallback comes into play, capture execution log data, etc. This isn't currently possible from what I can see in the docs, but maybe I'm wrong?

  It'd also be ideal to have an LLM-free executor mode to reduce costs and increase speed (caching outputs, or maybe using the accessibility tree instead of a VLM), which would also fit cases where the planner should not kick in automatically.
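  For the deterministic half, plain Playwright already covers this today. A sketch of what asserts-plus-artifacts could look like (the URL, placeholder text, and selectors are made up; none of this is Magnitude's API):

  ```typescript
  import { test, expect } from "@playwright/test";

  test("added todo exactly matches the input data", async ({ page }) => {
    const input = "buy milk";
    await page.goto("https://example.com/todos"); // placeholder URL

    await page.getByPlaceholder("What needs to be done?").fill(input);
    await page.keyboard.press("Enter");

    // Deterministic assert on page content.
    await expect(page.getByRole("listitem").last()).toHaveText(input);

    // Screenshot artifact, e.g. for diffing when a planner fallback fires.
    await page.screenshot({ path: "artifacts/todo-after-add.png" });
  });
  ```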
- tobr: Interesting! My first concern is: isn't this the ultimate non-deterministic test? In practice, does it seem flaky?
- grbsh: I know Moondream is cheap / fast and can run locally, but is it good enough? In my experience testing things like Computer Use, anything but the large LLMs has been so unreliable as to be unworkable. But maybe you guys are doing something special to make it work well in concert?
- aoeusnth1: Why not make the strong model compile a non-AI-driven test execution plan using selectors / events? Is Moondream that good?
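  One way to read that suggestion: the strong model emits a declarative plan of selector/event steps once, and a model-free executor replays it. The `Step` schema and plan contents below are invented for illustration:

  ```typescript
  import { chromium, Page } from "playwright";

  type Step =
    | { action: "goto"; url: string }
    | { action: "fill"; selector: string; value: string }
    | { action: "click"; selector: string }
    | { action: "expectText"; selector: string; text: string };

  // What the planner model might have compiled on its first (expensive) run.
  const plan: Step[] = [
    { action: "goto", url: "https://example.com/login" }, // placeholder
    { action: "fill", selector: "#email", value: "test@example.com" },
    { action: "fill", selector: "#password", value: "hunter2" },
    { action: "click", selector: "button[type=submit]" },
    { action: "expectText", selector: "h1", text: "Dashboard" },
  ];

  async function execute(page: Page, steps: Step[]) {
    for (const step of steps) {
      switch (step.action) {
        case "goto":
          await page.goto(step.url);
          break;
        case "fill":
          await page.fill(step.selector, step.value);
          break;
        case "click":
          await page.click(step.selector);
          break;
        case "expectText": {
          const text = (await page.textContent(step.selector))?.trim();
          // A failure here is where a system like Magnitude could kick
          // back out to the planner agent.
          if (text !== step.text) {
            throw new Error(`expected "${step.text}", got "${text}"`);
          }
          break;
        }
      }
    }
  }

  (async () => {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    await execute(page, plan);
    await browser.close();
  })();
  ```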
- dimal:

  > Pure vision instead of error prone "set-of-marks" system (the colorful boxes you see in browser-use for example)

  One benefit of not using pure vision is that it's a strong signal to developers to make pages accessible. Pure vision would let them off the hook.

  Perhaps testing both paths separately would be more appropriate. I could imagine a different AI agent attempting to navigate the page through accessibility landmarks, or even different agents that simulate different types of disabilities.
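  A rough sketch of the landmark idea, using Playwright's accessibility snapshot (deprecated in recent versions, but it still ships); an agent fed only this tree exercises roughly the path a screen-reader user takes, and an empty result is itself a meaningful failure:

  ```typescript
  import { chromium } from "playwright";

  (async () => {
    const browser = await chromium.launch();
    const page = await browser.newPage();
    await page.goto("https://example.com"); // placeholder URL

    // Accessibility tree: nested nodes with role, name, and children.
    const tree = await page.accessibility.snapshot();

    // Collect landmark roles an agent could navigate by.
    const landmarkRoles = ["navigation", "main", "banner", "contentinfo", "search", "form"];
    const landmarks: string[] = [];
    const walk = (node: any): void => {
      if (!node) return;
      if (landmarkRoles.includes(node.role)) {
        landmarks.push(`${node.role}: ${node.name || "(unnamed)"}`);
      }
      for (const child of node.children ?? []) walk(child);
    };
    walk(tree);

    console.log(landmarks);
    await browser.close();
  })();
  ```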
- jcmontx: Does it only work for Node projects? Can I run it against a staging environment without mixing it with my project?
- badmonster: How does Magnitude differentiate between the planner and executor LLM roles, and how customizable are these components for specific test flows?
- pandemic_region: Bang me sideways, "AI-native" is a thing now? What does that even mean?
- sergiomattei: Hi, this looks great! Any plans to support Azure OpenAI as a backend?