Comments (139)

  • ilaksh
    I think the only real reason the general public can't access this now is greed and a lack of understanding of technology. They will say it is dangerous to let the general public access it because people may attempt to self-diagnose or something. But radiologists are very busy and this could help many people. Put a strong disclaimer in there. Open it up to subscriptions for everyone. Charge $40 per analysis or something. Integrate some kind of directory or referral service for human medical professionals.

    Anyway, I hope some non-profit organizations will see the capabilities of this model and work together to create an open dataset. That might involve recruiting volunteers to sign up before they have injuries. Or maybe just recruiting different medical providers that get waivers and give discounts on the spot. It won't be easy, but it will be worth it.
  • owenpalmer
    I had an MRI on my ankle several years ago. At first glance, the doctor told me there was nothing wrong, even though I had very painful symptoms. While the visit was unproductive, I requested the MRI images on a CD, just because I was curious (I wanted to reconstruct the layers into a 3D model). After receiving the data in the mail weeks later, I was surprised to find a formal diagnosis on the CD. Apparently a better doctor had gotten around to analyzing it (they never followed up). If I hadn't requested my records, I never would have gotten a diagnosis. I had a swollen retrocalcaneal bursa. I googled the treatments, and eventually got better.

    I'm curious whether this AI model would have been able to detect my issue more competently than the shitty doctor.
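    A minimal sketch of the slice-stacking idea mentioned above, assuming the CD holds a standard single-frame DICOM series (pydicom and numpy are assumed dependencies; real series may need more careful geometry handling, and the path is hypothetical):

      # Stack a DICOM series into a 3D numpy volume.
      # Assumes: pip install pydicom numpy
      from pathlib import Path

      import numpy as np
      import pydicom

      def load_volume(series_dir):
          """Read every .dcm file in a directory and stack into a 3D array."""
          slices = [pydicom.dcmread(p) for p in Path(series_dir).glob("*.dcm")]
          # Order slices along the scan axis via the z component of
          # ImagePositionPatient (InstanceNumber is a fallback if absent).
          slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
          return np.stack([s.pixel_array for s in slices])

      volume = load_volume("mri_cd/ankle_series")  # hypothetical path
      print(volume.shape)  # (num_slices, rows, cols)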
  • daedalus_f
    The FRCR 2b examination consists of three parts: a rapid reporting component (the candidate assesses around 35 x-rays in 30 minutes and is simply expected to mark each film as normal or abnormal; this is a perceptual test and is largely limited to simple fracture vs normal), alongside a viva and a long cases component where the candidate reviews more complex examinations and is expected to provide a report, differential diagnosis and management plan.

    A quick look at the paper in the BMJ shows that the model did not sit the FRCR 2b examination as claimed, but was given a cut-down mock-up of the rapid reporting part of the examination invented by one of the authors. https://www.bmj.com/content/bmj/379/bmj-2022-072826.full.pdf
  • nopinsight
    This is impressive. The next step is to see how well it generalizes outside of such tests.

    "The Fellowship of the Royal College of Radiologists (FRCR) 2B Rapids exam is considered one of the leading and toughest certifications for radiologists. Only 40-59% of human radiologists pass on their first attempt. Radiologists who re-attempt the exam within a year of passing score an average of 50.88 out of 60 (84.8%). Harrison.rad.1 scored 51.4 out of 60 (85.67%). Other competing models, including OpenAI’s GPT-4o, Microsoft’s LLaVA-Med, Anthropic’s Claude 3.5 Sonnet and Google’s Gemini 1.5 Pro, mostly scored below 30*, which is statistically no better than random guessing."
  • trashtester
    AI models for regular x-rays seem to be achieving high-quality human-level performance, which is not unexpected. But if someone is able to connect a network to the raw data outputs from CT or MR machines, we may start seeing these AIs radically outperform humans at a fraction of the cost.

    For CT machines, this could also be used to concentrate radiation doses into the parts of the body where the uncertainty about the current state is greatest, even in real time. For instance, when using a CT machine to examine a fracture in a leg bone, one could start with a very low-dose scan simply to find the exact location of the bone, then a slightly higher, concentrated scan of the bone in the general area, and then an even higher dose in the area where the fracture is detected, to get a high-resolution picture of the damage, splinters, etc.

    This could reduce the total dose the patient is exposed to, or be used to get a higher-resolution image of the damaged area than one would otherwise want to collect, or possibly to perform more scans during treatment than is currently considered worth the radiation exposure.

    Such machines could also be made multimodal, meaning the same machine could carry CT, MR and ultrasound sensors (Doppler + regular), and possibly even secondary sensors, such as thermal sensors, pressure sensors or even invasive types of sensors. By fusing all such inputs (plus the medical records, blood sample data, etc.) for the patient, such a machine may be able to build a more complete picture of a patient's condition than even the best hospitals can provide today, at a fraction of the cost.

    Especially for diffuse issues, like back pain, information about bone damage, blood flow (from the Doppler ultrasound), soft tissue tension/condition, etc. could be collected simultaneously and matched with the reported symptoms in real time to find the location where nerve damage or irritation could occur. To verify findings (or to exclude them, if more than one possible explanation exists), such an AI could then suggest experiments that would confirm or exclude possibilities, including stimulating certain areas electrically, applying physical pressure, or even inserting a tiny probe to inspect the location directly.

    Unfortunately (or fortunately for the medical companies), while this could lower the cost per treatment, the market for such diagnostics could grow even faster, meaning medical costs (insurance/taxes) might still go up.
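    A purely illustrative sketch of the adaptive-dose loop trashtester describes. The scanner/model interface here is invented for illustration and is not any vendor's real API:

      # Hypothetical adaptive CT loop: start with a very low-dose scout
      # scan, then re-scan only the most uncertain region at higher dose.
      # `scanner` and `model` are invented placeholders, not a real API.
      def adaptive_fracture_scan(scanner, model, max_passes=3):
          image = scanner.scout_scan(dose="very_low")   # locate the bone cheaply
          region = model.locate_bone(image)
          findings = None
          for _ in range(max_passes):
              # Concentrate dose where the model's uncertainty is greatest.
              image = scanner.targeted_scan(region, dose="higher")
              findings = model.detect_fracture(image)
              if findings.confidence > 0.95:            # confident enough: stop
                  break
              region = findings.most_uncertain_region
          return findings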
  • smitec
    A very exciting release and I hope it stacks up in the field. I ran into their team a few times in a previous role and they were always extremely robust in their clinical validation, which is often lacking in the space.

    I still see somewhat of a product gap in this whole area when selling into clinics, but that can likely be solved with time.
  • davedx
    “AI has peaked.” “AI is a bubble.”

    We’re still scratching the surface of what’s possible. I’m hugely optimistic about the future, in a way I never was in other hype/tech cycles.
  • bobbiechen
    "We'd better hope we can actually replace radiologists with AI, because medical students are no longer choosing to specialize in it."- one of the speakers at a recent health+AI eventI'm wondering what others in healthcare think of this. I've been skeptical about the death of software engineering as a profession (just as spreadsheets increased the number of accountants), but neither of those jobs requires going to medical school for several years.
  • nradov
    I'm glad to see that this model uses multiple patient chart data elements beyond just images. Some earlier more naive models attempted to treat it as a pure image classification problem which isn't sufficient outside the simplest cases. Human radiologists rely heavily on other factors including patient age, sex, previous diagnoses, patient reported symptoms, etc.
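    A toy sketch of the kind of fusion nradov describes: concatenating an image embedding with tabular patient features before a classifier head (PyTorch assumed; all dimensions are arbitrary):

      # Toy multimodal fusion: image embedding + tabular patient features.
      import torch
      import torch.nn as nn

      class FusionHead(nn.Module):
          def __init__(self, img_dim=512, tab_dim=16, n_classes=2):
              super().__init__()
              self.classifier = nn.Sequential(
                  nn.Linear(img_dim + tab_dim, 128),
                  nn.ReLU(),
                  nn.Linear(128, n_classes),
              )

          def forward(self, img_emb, tab_feats):
              # Concatenation lets the classifier condition image findings
              # on patient context (age, sex, prior diagnoses, symptoms).
              return self.classifier(torch.cat([img_emb, tab_feats], dim=-1))

      head = FusionHead()
      logits = head(torch.randn(4, 512), torch.randn(4, 16))  # dummy batch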
  • aqme28
    This is far from the first company to try to tackle AI radiology, or even AI x-ray radiology. It's not even the first to have a model that performs on par with or better than radiologists. I'm curious how they solve the commercial angle here, which seems to be the big point of failure.
  • nightski
    Is it really a foundation model if it is for a specific purpose?
  • augustinemp
    I spoke to a radiologist in a customer interview yesterday. They mentioned that they would really like a tool that could zoom in on a specific part of an image and explain what is happening. For extra points, they would like it to be able to reference literature where similar images were shown.
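    One plausible way to back the "reference similar images from the literature" wish: embed the zoomed-in crop and run a nearest-neighbour search over a pre-embedded corpus of published figures. Everything below is illustrative; the embedding model producing the vectors is assumed, not specified:

      # Retrieve the k most similar pre-embedded literature figures
      # for a zoomed-in crop, by cosine similarity.
      import numpy as np

      def top_k_similar(crop_embedding, corpus_embeddings, k=5):
          """corpus_embeddings: (N, D) array of L2-normalised figure embeddings."""
          q = crop_embedding / np.linalg.norm(crop_embedding)
          scores = corpus_embeddings @ q          # cosine similarity per figure
          return np.argsort(scores)[::-1][:k]     # indices of the best matches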
  • husarcik
    As a radiology resident, it would be nice to have a tool to better organize my dictation automatically. I don't want to ever have to touch a PowerScribe template again.

    I'd be 2x as productive if I could just speak and it auto-filled my template in the correct spots.
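    A crude sketch of the speak-and-autofill idea: route each dictated sentence into a report template section. A real product would sit a trained classifier on top of speech recognition; the section names and keyword lists here are invented for illustration:

      # Route dictated sentences into template sections by keyword match.
      SECTION_KEYWORDS = {
          "FINDINGS": ["opacity", "fracture", "effusion", "nodule"],
          "IMPRESSION": ["consistent with", "suggest", "no acute"],
      }

      def fill_template(dictation):
          report = {section: [] for section in SECTION_KEYWORDS}
          for sentence in dictation.split(". "):
              for section, words in SECTION_KEYWORDS.items():
                  if any(w in sentence.lower() for w in words):
                      report[section].append(sentence.strip())
                      break
          return report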
  • isaacfrond
    From the article: "Other competing models, including OpenAI’s GPT-4o, Microsoft’s LLaVA-Med, Anthropic’s Claude 3.5 Sonnet and Google’s Gemini 1.5 Pro, mostly scored below 30*, which is statistically no better than random guessing."

    How is ChatGPT the competition? It’s mostly a text model?
  • seanvelasco
    Following this. Gonna integrate this with a DICOM viewer I'm developing from the ground up.
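    For anyone building a viewer from scratch, one of the first pieces is window/level (VOI) mapping: clamping raw pixel values into an 8-bit display range. A minimal sketch, assuming numpy and ignoring the DICOM VOI LUT metadata a full viewer would honour:

      # Map raw pixel values into 0-255 given a window center and width.
      import numpy as np

      def window_level(pixels, center, width):
          lo, hi = center - width / 2, center + width / 2
          out = np.clip(pixels.astype(np.float32), lo, hi)
          return ((out - lo) / (hi - lo) * 255).astype(np.uint8)

      # e.g. a typical bone window: window_level(slice_px, center=400, width=1800)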
  • ZahiF
    Super cool, love to see it.

    I recently joined [Sonio](https://sonio.ai/platform/), where we work on AI-powered prenatal ultrasound reporting and image management. Arguably, prenatal ultrasounds are some of the more challenging to get right, but we've already deployed our solution in clinics across the US and Europe.

    Exciting times indeed!
  • naveen99
    X-ray-specific model. Fractures are relatively easy; chest and abdomen x-rays are hard. Very large chest x-ray datasets have been out for a long time (like the ones from Stanford). Problem solving is done with CT, ultrasound, PET, MRI, fluoroscopy, and other nuclear scans.
  • Improvement
    I can't find any git link; I'll look into it later. From their benchmarks it looks like a great model that beats the competition, but I'll wait for third-party tests once it's released to judge the real performance.
  • moralestapia
    "Exclusive Dataset""We have proprietary access to extensive medical imaging data that is representative and diverse, enabling superior model training and accuracy. "Oh, I'd love to see the loicenses on that, :^).
  • infocollector
    I don't see a release? Perhaps it's an internal distribution to subscribers? Does anyone see a download/GitHub page for the model?
  • joelthelion
    Too bad it's not available llama-style. We'd see a lot of progress and new applications if something like that were available.
  • newyankee
    I wonder if there is any open-source radiology model that can be used to test and assist real-world radiologists.
  • hammock
    Radiology is the best job ever. Work from home, click through pictures all day. Profit