Tag: Machine Learning in healthcare

Machine Learning in healthcare is an AI capability. At Savana we use machine learning to build predictive models in clinical research.

A shortcut to use researchers' time for something useful.

What AI companies hide until it's too late.

Listen, because this post is very likely to save you quite a bit of time or money or both.
 
Now I’m going to tell you something very honest that no AI company will tell you.

 

Just like the hotel never took it upon itself to tell you that the beach trip with your friends wasn’t going to be what you expected.

Just like BMW is not going to be the one to tell you how much servicing costs.

Just like Thermomix won’t warn you that you won’t use it if you didn’t cook before.

 

Look, the thing about machine learning is that it will surface a lot of associations between variables that you didn’t know about.

 
In theory.

 

But sometimes it won’t.

Because sometimes the variables it surfaces are ones we already knew about from traditional methods.

 
This is nobody’s fault.
 
It is like when a clinical trial comes out negative. The science is in learning that the drug doesn’t work. It is what it is, and it is not a failure.
 
It is true that we often solve this by adding a layer of genomics or histopathology, but we do not always have that.

But, but, but…

 
Something even worse can happen.

 

It may happen that you do surface new predictor variables but, since they are not in the literature, nobody believes you.

Neither the journal editor, nor your boss, nor the conference’s scientific committee believes you. Worse still, you don’t believe it yourself…

… until you can prove that the algorithm works by testing it on another uncontaminated group of patients.

 
That’s called validation.
 
And until you have validation, it’s all tall tales.
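To make “validation” concrete, here is a minimal sketch with entirely made-up data and a deliberately simple one-cutoff “model” (nothing here is Savana’s actual method): the validation cohort is never touched while fitting, and only its accuracy tells you whether the model holds up.

```python
import random

random.seed(0)

# Hypothetical cohort: (biomarker value, responded to treatment).
# The data and the single-cutoff rule are purely illustrative.
xs = [random.gauss(5, 2) for _ in range(200)]
patients = [(x, x + random.gauss(0, 1) > 5) for x in xs]

# Development cohort is used for fitting; validation stays untouched.
development, validation = patients[:150], patients[150:]

# "Fit": pick the cutoff that best separates responders in development data.
best_cutoff = max(
    (c / 10 for c in range(0, 100)),
    key=lambda c: sum((x > c) == y for x, y in development),
)

# Validation: accuracy on patients the model has never seen.
accuracy = sum((x > best_cutoff) == y for x, y in validation) / len(validation)
print(f"cutoff={best_cutoff:.1f}  validation accuracy={accuracy:.2f}")
```

The only number that counts is the one computed on the patients the model never saw; the accuracy on the development cohort is exactly the “tall tale” until then.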

 

That’s why it’s important to give you a heads up.

 
Don’t do a project with AI.

Not unless you’re willing to go all the way.

Not unless you have someone to guide you from model to validation.

 

If you want guidance, it’s right here.

This pack of lies will make me reach more people interested in AI.

I am preparing a new opening for my talks on AI in Medicine.

Openings are important. They are the few seconds or minutes you have to make people think “this one is going to say something different” instead of “another boring talk like the 5 boring talks I just sat through”.

I explain this opening tip in the public-speaking course I give free of charge to friends and top Savana clients, and everyone who takes it says they come out speaking better.

This opening tip is called “delocalization” and consists of starting with something unexpected, which you can then relate in some way to what you are going to say.

I’ve used a lot of delocalization over the years, from chessboards to Swiss cows.

The opening I’m preparing now is going to be a lightning-fast succession of things we think are true, when they’re not. Things like:

– Napoleon was short
– Bulls like red
– Walt Disney is frozen
– The Great Wall of China can be seen from space

The idea is not mine.

 
I’ve ripped it off from Brad Templeton, my AI professor in Silicon Valley in 2014.

An exceptional teacher who made me understand just how much machine learning was going to shake up the world.

And who pushed me to set up not one, but two AI companies in healthcare.

Templeton’s classes were great.

 
Like the jaw-dropping one on autonomous cars.

One of those times when you’re listening to an expert who’s actually in the business, not someone who’s in a different business and read up on it to give the talk. It’s very different.

People ask me why my talks work.

 
Well, it’s very easy.
 
Because, like Brad, I’ve been doing it every day for years.
 
Full time.
 
And, by the way, I do it alongside many other clinicians with research CVs who came to Savana full time.

I don’t know a damn thing about the algorithms behind the algorithms.

 
And don’t even talk to me about programming.

But the hard work, here and there, that makes AI projects happen, keeps them from stalling, and gets practical results… that, I know a lot about.

 
I have some of them on 3 continents…

And if you want me to tell you about it, it’s here.

The word "democratize" is going to kill me.

And if you want to know what it’s like, here.

A journalist once again nails exactly what AI is not for.

This week a journalist asked me the same question again.
 
“So AI is good for diagnosing patients better?”
 
Here we go again.
 
What a craze journalists have for diagnosis.

They must think that our problem is diagnosis.

 
Journalists of the world, listen to what we have to tell you today.
 
Our problem is not diagnosis.
 
Stop asking about diagnosis.
 
Our problem is treatments.

Well…

Sometimes we forget why AI brings a great novelty to medical research.

But it’s not complicated.

It’s very easy.

Listen, I’m going to remind you…

The greatest utility of any clinical data analysis is to find the type of patient who benefits most from an intervention. From a drug, for example.

 
It’s what you want and what I want. And the patient. And the system. And everyone.

In the past we have been relatively bad at this, because even when we tried to come up with a mathematical formula to predict it, the formula didn’t contain the variables that actually led to one outcome or another.

 
This was so because the variables were picked up by humans.
 
And we humans have very good things, like humor, compassion, or holidays.

But we have bad things, like cognitive biases and fatigue.

 
That means we don’t pick up the right variables. Not all of them, at least.

We started to solve this problem around 2013, when we gained the capacity to accumulate many more variables by leaving the collection to machines.

Machines that can also analyze them, because they use methods that go beyond classical statistics.

If we do an AI project, it is possible, not certain but probable, that we will find new predictor variables that tell us which clusters of patients will respond better.

 
Sometimes we don’t find new variables, but we can rearrange the ones we already knew about into new formulas that predict better anyway.
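As a toy illustration of what “finding predictor variables” can look like, here is a sketch with invented data: the variable names (age, crp, bmi) are placeholders, not real findings, and ranking by simple correlation stands in for the richer methods real projects use. Each candidate variable is scored by the strength of its association with the response.

```python
import random
import statistics

random.seed(1)

# Hypothetical dataset: three candidate variables per patient, one of
# which ("crp") actually drives response. All values are invented.
n = 300
data = {
    "age": [random.gauss(60, 10) for _ in range(n)],
    "crp": [random.gauss(20, 8) for _ in range(n)],
    "bmi": [random.gauss(27, 4) for _ in range(n)],
}
response = [1 if c + random.gauss(0, 5) > 20 else 0 for c in data["crp"]]

def correlation(xs, ys):
    """Pearson correlation between a variable and the 0/1 response."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Rank candidate predictors by |correlation| with the outcome.
ranking = sorted(data, key=lambda v: abs(correlation(data[v], response)), reverse=True)
print(ranking)
```

On this invented data the signal variable rises to the top of the ranking; in a real project, that top-ranked candidate is precisely what then has to survive validation on an independent cohort.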
 

But that’s enough for today.

If this sounds important to you, we can look at it here.

Complete the info, and a KAM will contact you ASAP:

Want to use it?:

Start with your proposed AI + RWE use case:

This is the first step for AI + RWE: