Posts Categorized: AI Validation

Another Deep Dive Into AI Autosegmentation


How about we take another deep dive into AI autosegmentation? What do you say?

We have a new C3RO dataset to work with featuring CT images for a GYN/female pelvis case. However, because the physician contours collected for this dataset were primarily for target volumes, not organs at risk (OARs), I say we do these sessions with a different focus. Let’s use this dataset anyway, but do AI autosegmentation overlays as well, building and visualizing the results as 3D consensus maps for each submitted AI structure.

After all, it is very useful (and quite interesting) to understand how much variation we see from one AI model to another. Understanding variation is key to optimizing quality.
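To make the idea of a 3D consensus map concrete, here is a minimal sketch of one common approach: stack binary segmentation masks (one per AI model) and compute, voxel by voxel, the fraction of models that labeled that voxel. The function name, array shapes, and toy masks below are illustrative assumptions, not C3RO's actual pipeline.

```python
import numpy as np

def consensus_map(masks):
    """Stack binary 3D masks of identical shape and return, per voxel,
    the fraction of inputs that labeled that voxel (values in [0, 1])."""
    stacked = np.stack(masks, axis=0).astype(float)
    return stacked.mean(axis=0)

# Toy example: three slightly different "contours" on a tiny 4x4x4 grid
a = np.zeros((4, 4, 4), dtype=np.uint8); a[1:3, 1:3, 1:3] = 1
b = np.zeros_like(a);                    b[1:3, 1:3, 1:4] = 1
c = np.zeros_like(a);                    c[0:3, 1:3, 1:3] = 1

cmap = consensus_map([a, b, c])
# Voxels where all three models agree have value 1.0;
# partial agreement shows up as intermediate values.
full_agreement = cmap == 1.0
```

Thresholding a map like this at different agreement levels (e.g., "at least 2 of 3 models") is one simple way to visualize where models converge and where they diverge.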

As before, for any vendor that “opts in” and submits their AI-generated anatomical structures, I’ll arrange a web-based interview and publish it as a podcast where we will discuss, well, whatever comes up!

Email me at canislupusllc@gmail.com if you want to submit data and participate in an interview. (Note: You can opt to do the former without the latter, if you wish, but the interviews are a lot of fun.)...

Consensus Contours and AI Autosegmentation: Video Podcasts!


I’ve released a series of podcasts covering consensus contouring studies and interviews with AI vendors discussing AI validation and comparing “man vs. machine.”

Thanks in particular to Kevin Tierney (Radformation), Josh, Jon, and Carter (Limbus AI), and Mark Gooding (Mirada Medical) for their excellent and interesting interviews on their commercial AI engines.

The entire playlist can be found on YouTube by clicking this link.

Individual videos are as follows:

...

AI Autosegmentation Podcasts: Invitation and Info


Dear AI Autosegmentation Vendors,

I am reaching out to any/all vendors to invite you to take part in periodic, non-funded podcasts. The forum will allow you to introduce yourselves, show your wares, and share your thoughts about how the radiation therapy industry can best validate AI autosegmentation engines/models.

The conversations will center around the imaging datasets and the population of radiation oncologists’ manual segmentations generated by the non-commercial, not-for-profit project called “C3RO” (Contouring Collaborative for Consensus in Radiation Oncology).

Please see the details below. Email me (Ben Nelms) directly at canislupusllc@gmail.com if you are interested and would like to get on the schedule.

Thanks,

Ben Nelms


What

An unbiased podcast/interview series focusing on AI autosegmentation of human anatomy, specifically to use as an input to radiation treatment planning.

Why

Goal 1. Elevate the conversation about how best to validate AI outputs, both in the short and long term.

Goal 2. Get to know AI vendors in a casual, scientific, and “non-salesy” forum.

Goal 3. Show cool, real-time results from (1) populations of human experts and (2) AI engines.

Goal 4. Generate some pretty great ideas about how to build gold standards of human anatomy segmentation to use for both education of clinicians as well as validation of AI software.

Who

Host. Ben Nelms, Ph.D. (Canis Lupus LLC)

Guests. Representative(s) from any/all willing AI vendors and research groups who specialize in anatomy autosegmentation

Note: Conversations (likely all of them, or at least the initial ones) will be one vendor at a time. This helps ensure equal airtime and no distractions.

How

The conversation flow will be casual, with an underlying structure to cover some, if not all, of the following topics.

Intro. Your background, and what brought you to this field? (Optional) How does your group currently validate your AI outputs? Is it quantitative, qualitative, or both? (Required)

Data / Results. Generate contours – ideally in real time, right before or in the early minutes of the meeting – for the image set in question. Compare your automated outputs to (1) the population Isoagreement clouds and (2) various expert contours (if available), per structure....
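One standard way to quantify the "compare your automated outputs" step is the Dice similarity coefficient (DSC) between an AI mask and a reference mask (an expert contour or a consensus-derived one). This is a generic sketch with illustrative variable names, not a claim about how C3RO scores submissions.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: two 4x4x4 boxes offset by one voxel along one axis
ai_mask  = np.zeros((8, 8, 8), dtype=np.uint8); ai_mask[2:6, 2:6, 2:6] = 1
ref_mask = np.zeros_like(ai_mask);              ref_mask[3:7, 2:6, 2:6] = 1

score = dice(ai_mask, ref_mask)
# Overlap is 3x4x4 = 48 voxels; each mask has 64, so DSC = 2*48/128 = 0.75
```

DSC is only one lens – surface-distance metrics tell a complementary story – but it is the most widely reported overlap measure in autosegmentation validation.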

Come Wonk with Me: Digging into C3RO Data


The Contouring Collaborative for Consensus in Radiation Oncology (C3RO) is picking up steam. We recently reached a checkpoint after our first ~quarter year – sessions on three different body sites – and asked ourselves, how is it going?

Are our goals clear? Will achieving our goals be impactful in a tangible way? Are our methods sound? Are we providing value to the radiation oncologists participating in the program as well as the industry as a whole? Are we squeezing the goodness of knowledge out of this mysterious fruit? And are we having any fun?

Then we asked our participants a bunch of questions, too, as an electronic survey. Two of the main messages that came out of the survey were these:

[ 1 ]  People really hunger for detailed “How I created my anatomical contours and why” explanations by invited expert panelists. Rather than trying to go fast and cover lots of material and many regions of interest, we should slow down and talk about them, and debate them, in greater detail.

[ 2 ]  People are also interested in the population statistics, the performance of the “wisdom of the crowd,” and how it relates to potentially deriving or vetting gold standards based on consensus calculations.

Well, you spoke, and we listened! So, we are going to start doing two parallel and complementary tracks in terms of podcasts.

The first track will be hosted by radiation oncologists with radiation oncologists as panelists. The main focus here will be the “how” and “why” of experts’ contours, and hopefully some healthy discussion and negotiation of observed differences.

The second track will be led by yours truly, and we’re going to get unabashedly wonky and nerdy about it. I’m going to try to get interested AI vendors, researchers, and other people who deep-think about these things as my panelists. We’ll talk a lot about statistical variation, what we can learn from it, the challenges it poses, and how to potentially tease out great wisdom from the crowd and get to one of the holy grails of modern radiation therapy: building standard datasets against which AI autosegmentation can be measured and potentially validated....