Transcript
Thank you, Tony. Good morning, everybody. This is the topic I find the most fun now in my career, and I hope to share some of that enthusiasm with you. Obviously, this morning we're talking about software, and now we're gonna talk about software as it applies to making radiosurgery a more intelligent field. Brainlab has been a big supporter of this, contributing a lot of time, money, and software support to a national American registry. But other people have been involved in this as well.
So where are we right now? Well, what do we really wanna know? Yesterday, we talked about, you know, what randomized trials we want. We want information that's relevant to your practice and your patients, that's reliable and reproducible, and that's bias-free. And a randomized trial is good for number one and two, sorry, two and three, but it's not actually great for number one, because the trial might be about lung cancer in 50-year-old people, and you've got a 75-year-old woman with renal cell carcinoma. It's not the same thing. The trial really doesn't apply, and you're stuck, you know, making a decision.
So what have we done for the last, you know, 80 years? We collect data, mostly retrospective, hard copy files, digital files. This is our file room in Pittsburgh. We had all the films, not digitized. We would send a fellow into the room for a month and put food under the door and say, "Come out after a month and tell us what you've found." And we were pretty good at it, but it was labor-intensive and not really great.
So how often do we do randomized trials? We don't do very many, actually, because they're so labor-intensive; all of neurosurgery does about three, four, or five a year, that's it. So we're not learning very fast. We're not overturning dogma very quickly. So we had a registry in Pittsburgh over 25 years. We collected data on 13,000 patients, which seemed impressive. We wrote a lot of articles. It wasn't very good though, as I'll tell you, and it was really small data. And this is the evolution from small data to what we call big data.
So small data is what we do now. It's driven by hypotheses about how things work. We say, well, you know, maybe the cochlea is relevant for acoustic neuroma hearing. So let's go back and look at the cochlea and see what happens. And there's a P-value that says, you know, less than 4.2 gray is good for hearing. And then the whole world changes, and we all try to keep the cochlea below 4.2 gray. Big data's totally different. It's understanding through abundant actual data. And Amazon's made a killing by recommending things to you. You go on Amazon to buy something, and at the bottom are all those other things, the music you might like, that they know based upon the world around you.
So knowing why is what we've been focusing on. We always wanna know why in medicine. Why do we lose hearing? Now, that's probably the hair cells of the cochlea. Who cares, really? I'm gonna be dead before we actually answer those questions. So knowing why is pleasant, but actually not very important. Knowing what is crucial, and knowing what is the change with big data. Knowing what is about correlation. So knowing why: why do patients lose hearing? Well, is the tumor stretching the nerve? Maybe. Radiation effects on the cochlea? People think so. I'll show you that's not right. Radiation effects on the cochlear nerve? We ignore that because we can't see it.
Aging, we ignore that. Is the cochlear nerve ischemic? We ignore that. Late scar tissue from radiation. We can't measure it, so we ignore that. Doesn't really matter. We can only know what correlates with hearing loss, not why. So the big data movement is really powerful. We hear about it all the time. There's a journal called "Big Data." I'm on the editorial board of that journal.
And what we see right now in our journals is a lot of articles looking at national databases collected through insurance companies and other sources. And there's a rapid increase in publications with this kind of information. But I'll show you that while it's big data, it's not great data, because it's not really the data we want. It's data that somebody else collected, usually for billing. It's got age in it, maybe the tumor type, but it doesn't include why the patient got surgery or radiosurgery, or how sick they were. So the data fields are really limited. It doesn't tell us why. It's easy to mine, so people do it, but it's not great.
So we need prospective, pragmatic, real-world data collection systems, high fidelity, for research or quality care. That's what people have been doing. So we're collecting data for one purpose, let's say radiosurgery, and then we can reuse it for other purposes. That's what this field really is. But we need a standardized language. Here's a listing of all the ways to describe the platelet count. There are like 50 different ways. And even survival: do you report it in weeks versus months versus years? You know, you need the same terminology. So the data elements have to be standardized, and that requires a lot of work, and we don't all agree as to what data we really want.
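To make that concrete, here's a minimal sketch of what a standardized data element might look like in code. The field names and units are illustrative assumptions, not any registry's actual schema.

```python
from dataclasses import dataclass

# A standardized data element: one agreed-upon name and one agreed-upon unit,
# so "platelet count" is never recorded fifty different ways.
@dataclass
class DataElement:
    name: str     # e.g., "platelet_count" (single controlled term)
    unit: str     # e.g., "10^9/L" (single controlled unit)
    value: float

# One way to write it, not fifty:
platelets = DataElement(name="platelet_count", unit="10^9/L", value=250.0)

# Survival reported in one agreed-upon unit (months), not weeks or years:
survival = DataElement(name="overall_survival", unit="months", value=14.5)
```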
So, datafying: we need to know how to measure, and to record what we measure. We need the right set of tools, and we need desire. This is a big part here. We need to wanna do this. We talk a lot about the fact that this would be great, and I'll show you, this is hard. So we need to really want to do it.
So what's the solution? We build these registries, and we collect data prospectively. And so when I look at an article on 50 meningiomas, it's kind of sad. You know, there are 1.2 million people who have had the Gamma Knife and about a million people who have had LINAC radiosurgery. Why are we writing articles on 100 meningiomas? Why don't we... What about 20,000 meningiomas? We did these huge studies in blood pressure. Why aren't we doing them in medicine? So we need standardization and participation, but also auditing, oversight, and independence; we have to make sure the data was entered properly. There are a lot of issues here.
So here's a system that we built with Elekta years ago, and I've now got 3,000 patients in my own local prospective registry, and the Brainlab-funded NNS registry in the United States is growing, as I'll show you as well. So I log in. We use it every day: 19 categories, 47 disorders, everything I ever wanted to know, so I never have to go back to a chart. And so here's, you know, all the breakdowns of everything that we do. This is vestibular schwannoma. So there's demographics, disease, treatment, and follow-up. What I'm showing you is text, no images. That's a weak element of this; we don't have images. The Brainlab system has images, as I'll show you.
So what do we wanna know about vestibular schwannoma? It's all the grading scores, it's hearing, it's the audiogram, it's the tinnitus scores. It's everything you would care about. From a treatment perspective, it's the volumes, the sizes, the dosimetry, the dose planning issues, even complications the day of the procedure. And when they come in for follow-up in the clinic, they get a standard letter, but the data gets entered: all the scores again, the hearing tests, and all of this. It takes about six minutes at the beginning. Follow-up takes about 45 seconds. Sounds fast. It's not fast. That's long. That is difficult. That is a problem.
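As a rough picture of what one of those entries might hold, here's a hedged sketch of a vestibular schwannoma record in code. Every field name is an assumption for illustration, not the actual schema of the Elekta or Brainlab systems.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Illustrative sketch only -- field names are assumptions, not a real schema.
@dataclass
class FollowUp:
    visit_date: date
    speech_discrimination_pct: int        # hearing test repeated at each visit
    tumor_volume_cc: Optional[float] = None
    complications: Optional[str] = None

@dataclass
class VSRecord:
    # demographics
    patient_id: str
    age: int
    sex: str
    # disease at baseline
    tumor_volume_cc: float
    hearing_grade: int                     # e.g., grade 1-5
    speech_discrimination_pct: int
    # treatment
    treatment_date: date
    margin_dose_gy: float
    cochlear_dose_gy: float
    # follow-up visits appended over time
    followups: List[FollowUp] = field(default_factory=list)
```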
Everybody says, "Well, why can't you just pull it right from the electronic record?" Natural language processing? Forget it, it doesn't work. It's too complicated. The records are not built for this. So who does the work? We do the work. I do the work. If I don't have a fellow, I do it. If I have a fellow, they do it, or I have a student trained to do it. We do it every day. Nobody leaves until all the data is in, and then we export it. This is the powerful stuff: internal reports. We have some software to mine it, and we can transfer it to national studies. You see a lot of these international radiosurgery studies. When I get invited, it's easy. I just mine the spreadsheet. I can send it tomorrow. So it's very powerful.
So here's a dashboard. Everybody wants these lovely dashboards. They're not that great, to be honest. This is simplification of the data. I can go in this and look at it. Within a second, here's 3,000 people, where the patients come from, indications. Here's my local American map. The hospital loves this because they see where all the patients come from, and they say, "Oh, you haven't treated anybody from Montana lately," you know, this kind of thing.
So this is hearing, and I show this to my patients. I log on and show them the live data. And I say, "Well, you have grade one hearing. Here are my grade one patients, and here's what happened to them." You can show them this. This is the breakdown of mets. This is the AVM dashboard, looking at embolization, bleeding, and so on.
This is superimposing audiograms on top of each other, over time. And then we can do statistics on how audiograms change at different hearing frequencies. No one's ever been able to do that. You know what we do with audiograms? Audiograms are complex data, and we take them and simplify them to one number, the speech discrimination score, and we ignore all the rest of the data, which is a classic error.
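A minimal sketch of that kind of frequency-level analysis, assuming a hypothetical long-format audiogram export (the column names and values are made up for illustration):

```python
import pandas as pd

# Hypothetical long-format audiogram data: one row per patient, visit, and
# test frequency. Column names are assumptions, not a real export format.
audiograms = pd.DataFrame({
    "patient_id":     ["p1", "p1", "p1", "p1", "p2", "p2", "p2", "p2"],
    "years_post_srs": [0,    0,    3,    3,    0,    0,    3,    3],
    "freq_hz":        [500,  4000, 500,  4000, 500,  4000, 500,  4000],
    "threshold_db":   [20,   30,   25,   55,   15,   25,   20,   40],
})

# Mean threshold per frequency at baseline vs. 3 years, instead of collapsing
# the whole audiogram to one speech discrimination number.
by_freq = audiograms.pivot_table(index="freq_hz", columns="years_post_srs",
                                 values="threshold_db", aggfunc="mean")
by_freq["shift_db"] = by_freq[3] - by_freq[0]
print(by_freq)
```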
So how do we talk to our patients? This is what we do: we quote what we remember. It's what the world does. Or we quote the actual literature. That's slightly better, not just what I remember, but what's real. Or we quote our own outcomes data: yeah, I think I've got a 6% complication rate, you know, I remember this. Or we show our own published data. I do that a lot. I print out an article that I wrote five years ago, and they take it home and read it. But that was 5 years ago, from patients managed 10 years ago. Or I can show them the actual current data. This is the cool part.
So here's cochlear dose and hearing again. And this is what I said: well, I wrote an article 10 years ago about cochlear dose, and I can print it out and give it to them. I don't believe cochlear dose is actually that important, but it's published. Maybe it's true. Here's the actual data. This is live; this is, like, from last week. So if you have a speech discrimination score of 90 to 100, this is only people with that data, and this is my outcomes based upon cochlear dose. You can see everybody's between 2 and 6 gray, so not always less than 4, and you can see it's a random distribution of outcome.
Most of the time we're keeping hearing. You can see most of it's staying high, between 90 and 100, and some of it's dropping down. Cochlear dose is really irrelevant in this distribution. Here's tumor volume. So if you've got a volume less than 1 cc, the small acoustics, you can see the cluster of my outcomes is good, between 90 and 100, for small-volume tumors. This is an argument against observation. So if you watch... if the patients are watching their tumors get bigger, the decline is gonna happen. The data is gonna get worse.
Now, if you wait and the hearing drops a little bit, so now you're out of the 90 to 100 range, you're down to 50 to 90, I'm not so good anymore. My data is just falling down, and it's a random distribution because of all those factors we never measured. It's actually got nothing to do with cochlear dose. It's a random distribution here, with other things that are important. Probably the most important thing is how long the tumor has been there in the patient, which you cannot know, because we don't know that.
Here's target volume with lower hearing again. Look how I'm not very good, but smaller-volume tumors, again, do better. So the point is, don't wait if you're really interested in hearing. Now, look at age. I saw a patient last week who was 35 years old and had seen everybody in New York. I showed him my actual data in 30-to-45-year-old people with high-level hearing. And you can see that the numbers aren't high, because we don't treat a lot of young people, but there it is. We're keeping most of the hearing at least out to 5 years in this group, different from 46 to 60, different from over 60. We're still pretty good, but it's starting to fall apart there.
This is the actual data. There are no P-values. I'm not showing you any statistics. I'm just showing you the actual distribution of outcomes, live, with the patient in the office. This is you. So pre-radiosurgery hearing is likely more important than cochlear dose, as I just showed you. And if hearing is good, cochlear dose may not really matter that much either. And that's exactly what people are now publishing. If you have high-level hearing, Class 1 hearing, you're just gonna do better. So it's an argument against observation.
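That kind of live, filtered view is straightforward to reproduce from a flat registry export. Here's a minimal sketch, assuming hypothetical file and column names, of filtering to patients with baseline speech discrimination of 90 to 100 and plotting their latest scores against cochlear dose.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed flat export, one row per treated vestibular schwannoma; the file
# name and column names are illustrative, not a real registry schema.
df = pd.read_csv("vs_registry_export.csv")

# Live filter: only patients who started with class 1 hearing (SDS 90-100).
good_hearing = df[df["baseline_sds"].between(90, 100)]

# The raw distribution shown to the patient: no P-values, just the outcomes.
plt.scatter(good_hearing["cochlear_dose_gy"], good_hearing["latest_sds"])
plt.xlabel("Cochlear dose (Gy)")
plt.ylabel("Latest speech discrimination score (%)")
plt.title("Hearing outcome vs. cochlear dose, baseline SDS 90-100")
plt.show()
```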
So what about live, on-the-fly survival curves? If you're interested in brain metastases, for example, it used to be a lot of effort to go make these Kaplan-Meier curves. They're now live every day. I can just go into this, and they're automatically created. This is filtered. This is comparing lung to breast to melanoma survival curves, breast being better, lung and melanoma actually equal here. And this is just filtering live. This is discovery. It used to be we'd sit in a room and say, "What do we wanna study? Well, let's do a study on lung cancer." Now, we go to the data and say, "Wow, you know, young people with lung cancer do just as badly as old people with lung cancer."
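Curves like that can be generated on the fly from a registry export. Here's a hedged sketch using the lifelines library, with made-up file and column names, comparing survival by primary histology.

```python
import pandas as pd
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

# Hypothetical brain-metastasis export; column names are assumptions:
# survival_months (follow-up time), died (1 = event), primary (histology).
df = pd.read_csv("brain_mets_export.csv")

kmf = KaplanMeierFitter()
ax = plt.subplot(111)
for primary in ["lung", "breast", "melanoma"]:
    cohort = df[df["primary"] == primary]
    kmf.fit(cohort["survival_months"], event_observed=cohort["died"],
            label=primary)
    kmf.plot_survival_function(ax=ax)   # overlay the three curves
ax.set_xlabel("Months after radiosurgery")
ax.set_ylabel("Survival probability")
plt.show()
```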
It's interesting. Why would young people do just as badly? Because if you've been smoking for 25 years and you're 40, you're in trouble, versus older-onset lung cancer. Here's data discovery with trigeminal neuralgia, looking at pain outcomes. The interesting thing is this was generated by a medical student who was just interested in the topic. It took one day to get this information. I say to my physicist, "Mine the data for trigeminal neuralgia," and here it comes. And so, the typical age range: 50 to 60 is the most common.
I was surprised, actually, that our patients are female to male by two to one. I wasn't really aware of that. And this is follow-up: good results, scores I to IIIa, based upon age. Interestingly, under 30 years old, we've got some patients, and they've all done well. I was surprised to see that. And the overall rate of pain control is about 85% in this group. This is separating out MS from non-MS. The point is, it's just simple, easy data discovery. Female patients doing better, that took virtually no time. And then this is what you would want to explore further.
Now, this is the American registry that Brainlab has supported, and you can see it's now starting to grow. This is multicenter: 2,500 follow-ups, almost 4,000 radiosurgery procedures, 23 medical centers, over 3,000 patients. So it's starting to get numbers. The goal of this was to get to 30,000 patients over a relatively short period of time. It's been slow, mostly because of legal issues for hospitals to communicate with each other and transmit data. But it's growing. The average monthly accrual is about 80 patients. And this is somewhat... it's not exponential yet, but it's getting bigger.
This is the dashboard of the Quentry registry, similar to the dashboard I showed you for the Elekta registry. This is now starting to use the data. So, with a lot of patients with brain metastases, one question would be: how is the dose related to the outcome? Could you create a system, a dose optimizer? And the answer is yes. You could start to plug in a few factors to see in which diseases a given dose led to a better outcome. It's kind of cool. And here's lung cancer: typing in an age between 60 and 90, looking at a certain tumor volume, running the registry, and then printing out what might be best for you. These are just intelligent tools for support.
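As a rough sketch of what such a decision-support lookup might do, and emphatically not the actual Quentry implementation, here's one way to query prior similar patients and summarize outcomes by prescribed dose, with all column names assumed.

```python
import pandas as pd

# Illustrative only -- not the actual dose-optimizer logic or schema.
df = pd.read_csv("brain_mets_export.csv")

# Find prior patients similar to the one in front of you: lung primary,
# age 60-90, tumor volume in a comparable range (thresholds are made up).
similar = df[(df["primary"] == "lung")
             & (df["age"].between(60, 90))
             & (df["tumor_volume_cc"].between(1.0, 3.0))]

# Summarize a hypothetical outcome (12-month local control) by margin dose.
summary = (similar.groupby("margin_dose_gy")["local_control_12mo"]
                  .agg(["mean", "count"])
                  .rename(columns={"mean": "control_rate", "count": "n"}))
print(summary.sort_values("control_rate", ascending=False))
```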
This is some other work looking at the imaging side of things. And this is, like, where do brain metastases even go in the brain? I mean, the dose plan knows where they are spatially. Well, where do they tend to end up? Does that matter? Well, maybe. It may focus you in terms of where you're looking, or maybe, you know, there are different ways to think about this. So this can be thresholded just to show you where this affects the brain. For example, how often do brain metastases go to the hippocampus, if you're interested in hippocampal sparing? I'll just skip some of that stuff.
So this is the end. You know, we've lived in this sexy era for the last 10 years of what we call evidence-based medicine, which was randomized trials, trying to get people to do randomized trials or at least level two studies, which is a discontinuous process: the study is done, and then you think about it. Everybody quotes the Patchell study in the New England Journal, you know, the most cited study in neuro-oncology, on brain metastasis.
Do you know how many breast cancer patients are in that study? Two. Two. It's essentially a small lung cancer study. It changed the world. If you had breast cancer with brain mets, you know, you needed this, based on two patients in that study. In a randomized trial, you assume surgical skill is uniform. You assume that everybody's the same, that whoever did the study is the same as you, and compliance determines quality. This is different from the future. The future is the science of practice, which is well-designed registries of what we actually do to patients. It's iterative. One of the variables is us. We do things differently, and that's okay.
And this encourages innovation, and it's present tense, and the outcome determines quality. But it's labor-intensive. It requires champions, corporate support, local support, buy-in. It's expensive. The result should be scalable to other things beyond what we do, but it's tremendously powerful and very, very exciting. And I think the future of our meetings is gonna be more data coming out of this, discovering what we do, rather than just looking at the past. Thank you.