Tag: Real-World Evidence generation

Generating RWE from the most realistic clinical data by combining AI capabilities with scientific expertise.

What healthcare AI course to take?

Ignacio H. Medrano, founder and Chief Medical Officer:

 

I have a friend named Nuria Gago.

She called me a while ago because she was writing a book with a character who had Alzheimer’s, and she wanted me to tell her about the disease.

I told her.

And she included some of my phrases verbatim, which is a curious thing to come across when you read it later.

She won the Planeta Prize, the most prestigious literary award in the Hispanic world.

 

So I asked her the typical stupid question, whose answer I already suspected:

– Nurigago, what writing course do you recommend to improve my writing?

– None. Just start writing.

 

Always that response…

Ask a good fisherman what fishing course he recommends. You’ll see what he says.

Or ask Michael Jordan what basketball course he recommends.

Or ask Dabiz Muñoz, the best cook in the world, what cooking course he recommends. You’ll see…

Or ask the best dad you know which fatherhood course he recommends.

 

I’m often asked about a course on AI and healthcare.

Sometimes I recommend one out of obligation, but deep down, I think: none. Start a project.

 

And if not, ask Elon Musk which MBA he recommends.

Well, I think the idea is clear.

 

Courses are good for quieting your conscience and procrastinating.

For learning, do.

 

And for an AI project, you can do it here.

 

P.S. I think the courses that really make sense are the very short ones, a few hours long, that give you context, help you find out whether you like something, and show you where to start.

Why AI is going to kill Google.

Ignacio H. Medrano, founder and Chief Medical Officer:

 

Google is going to die.
I think.
Most likely, you also thought about it.
 
I have been thinking about it for years.
How many more years will Google last?
I didn’t have an answer.
 
But I got it within 2.5 minutes of using ChatGPT.
So I bought Microsoft stock.
It has appreciated 60% to date.
I bought a small amount, so it doesn’t change much for me.
But I don’t do these things to make money; I do them to make my predictions, accurate and inaccurate alike, tangible.
That’s why I bought Bitcoin in 2014.
 
But wait, I’m getting off track.
 
I was saying that Google is going to die, and not because OpenAI’s GPT-3 model is more powerful than Google’s BERT.
But because, whoever ends up imposing their AI, this new way of searching for information will end Google’s click-based model.
That’s why its end is approaching, either at the hands of another or by cannibalizing its own product.
 
How many smart people must be working there?
People who know technological disruption perfectly well.
They have read all the books about Kodak, Motorola, and such.
 
And still, boom. Six feet under.
And do you know why I think that is?
Because the ability to withstand disruption does not correlate with intelligence. Nor with knowledge.
It correlates with humility.
It’s an attitude.
 
That’s why, when AI sweeps away the old way of generating clinical evidence, when conventional data collection and conventional statistical analyses no longer matter… the biggest won’t survive.
Not the smartest.
Not those with the highest H-index.
 
The humblest will survive.
Those who accept reality and quickly adapt to it.
In fact, those who have already started doing so.
 
And if you’re reading these emails, it’s very likely that you are one of them.

 

The error of patenting an AI algorithm.

Ignacio H. Medrano, founder and Chief Medical Officer:

 


When I was a resident, I had a very smart and funny colleague (who later coincidentally worked at Savana for a while) who talked about the “circle of death.” 
She referred to iatrogenesis. 
 
To patients who were relatively well but whom, by following “the protocol,” we would admit or send to the ICU, triggering a catastrophe.
 
I thought about it recently when a relative of mine was found to have asymptomatic metastases from a kidney primary tumor he had 20 years ago! 
So, I informed my brother: 
“He has an incidental finding. Since he’s 80 years old, the only sensible thing would be to leave him be. But as Medicine is what it is, and the healthcare system is what it is, they will undoubtedly give him immunotherapy. It is highly likely (I said ‘highly likely’) that he will experience serious adverse effects, complications will arise, and that may be what kills him. It’s called the ‘circle of death.’ That’s just the way it is, and we can’t do anything about it (this family member would never prioritize my opinion over that of his oncologist).” 
 
He had a heart attack after the first dose. 
It might not have happened, but it’s a matter of probability.
 
And I have no clue about Oncology or Cardiology. 
But I believe in the data. 
And I think about the missing data. 
 
The countless cases like this that will happen every day.
And which are not documented anywhere. 
Simply because certain administrations haven’t taken action. 
 
The problem is not technological. 
It’s not about the budget. 
It’s about taking action. 
 
I enjoy writing for people who like to take action. 
Those who respond to these emails saying, “I’ve been wanting to connect A and B for a while. Can I do it with Savana?” 
 

What to do with AI-intruders.

Listen to what Matilde Sánchez Conde, an Infectious Disease physician, tells me in an email:

It bothers me that I think I’ve been a little ahead of all this, wanting to do things like defining the frailty model for more than 10 years now… it’s now the bandwagon everyone’s jumping on, and despite having had a clear understanding of it from the beginning, I’ve never been able to get funding for anything along these lines. At first, nobody understood it, and now everyone’s an expert.

 

That’s right, dear internet friend.

Matilde is suffering from… (drumroll…) the innovator’s dilemma.

There’s a very famous book on the subject.

 

But this book can be summarized quickly.

 

The idea is that when you show up with something very new, nobody pays attention.

And, as technology is exponential, when it explodes, it will do so suddenly.

And then everyone is into it, talking about it, and acting like an expert.

 

And that poor innovator, ignored for a decade, feels frustrated and powerless because their ahead-of-its-time initiative gets diluted in the excited crowd.

 

If you’re reading these emails, it’s probably happened to you.

And it may have happened to you with AI.

 

Well, I’ll tell you something. And you know I’m not one to sugarcoat things.

It’s not a big deal.

Nothing at all. Just relax.

 

Because those who are jumping on the AI trend, but were in the Metaverse yesterday and will be in genomics-at-home tomorrow (which is probably the next big thing), are harmless.

Because their manifest lack of depth won’t take them anywhere.

To do an AI project that’s worthwhile, it’s not enough to copy and paste ideas.

You have to understand what’s going on behind the scenes.

Why it’s happening.

 

Why one algorithm is more relevant than another.

Or how often it needs to be retrained.

Or whether the question calls for clustering or for risk stratification (sketched below).
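To make that last distinction concrete, here is a minimal sketch in Python with scikit-learn, on purely invented data (the feature matrix and outcome below are hypothetical, and this is not Savana’s method): clustering groups patients without any outcome label, while risk stratification learns a probability for a labelled outcome.

```python
# Minimal illustrative sketch, not Savana's pipeline: the same hypothetical
# patient table framed as clustering vs. risk stratification.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                            # hypothetical patient features
y = (X[:, 0] + rng.normal(size=500) > 0.5).astype(int)   # hypothetical binary outcome

# Clustering: no outcome label; the answer is a grouping of patients (phenotypes).
phenotypes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Risk stratification: a labelled outcome; the answer is a risk score per patient.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
risk = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

print("cluster sizes:", np.bincount(phenotypes))
print("patients above 0.5 predicted risk:", int((risk > 0.5).sum()))
```

Two different questions, two different families of methods, and two different ways of validating the answer.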

 

You don’t acquire that knowledge by reading the news.

You learn it over the years, by proposing studies and making mistakes.

That will protect you from any intruder.

 

Just ask three questions, three, and you’ll know if the person behind it really knows why machine learning is completely changing medical research.

And that, precisely that, is what we’re going to take away when we do a project together here.

 

What are the 3 questions? Here.

Unknown reason why AI research fails.

We have no freaking idea, but we imagine that if we had a restaurant and cooked well, that would only be the beginning.

We would also need to manage staff and suppliers well.

And we suppose that many people will fail because they only think about the first part.

That’s what a child would think.

The second part is what an adult thinks about, and often suffers over when it’s too late.

This is what often happens to us in calls and meetings.

When we meet people who believe that the key to doing good Real World Evidence with AI is to apply very good mathematical models to quality data.

That’s true.

But it’s insufficient.

As people who have no freaking idea about anything and yet talk about everything, we think one key piece is forgotten, really, forgotten all the time.

It’s ugly, unpleasant. And no one wants to talk about it until you’re in the shit.

It’s precisely how to get healthcare centers to give access to their data.

And we’re not talking about ethics or law.

We assume the data is anonymized and the project approved by the Ethics and Legal Committee.

We’re talking about the other stuff.

The IT guy who has no time.

The politician who dislikes the pharmaceutical industry and stops the project.

The hospital that doesn’t even know where its databases are.

The Clinical Director who says that the Committee has to sign it, the Committee that says that the IT guy should sign it first, and the IT guy who answers that, of course, he’ll sign it as soon as the Clinical Director signs it.

The labyrinth of data.

We work for a company.

It’s called Savana.

And we’re experts, real experts, at navigating that turbulent river.

The thing is, we have 26 people dedicated solely to this task and have worked with:

200 hospitals
11 countries
32 different EMR systems

To conduct research studies with AI, using a team and technology that are prepared to reuse health data ethically, here.

Want to use it?

Start with your proposed AI + RWE use case:

This is the first step for AI + RWE: