AI in Healthcare:

Signals in the noise

A shortcut to put researchers’ time to better use.

What AI companies hide until it's too late.

Listen, because this post is very likely to save you quite a bit of time or money or both.
I’m going to tell you something very honest that no AI company will tell you.

Just like the hotel didn’t take it upon itself to tell you that that beach trip with friends wasn’t going to be what you expected.

Just like BMW is not going to be the one to tell you how much servicing costs.

Just like Thermomix will not warn you that you won’t use it if you didn’t cook before.

Look, the thing about machine learning is that it will bring up a lot of associations between variables that you didn’t know about.

In theory.

But sometimes it won’t.

Because sometimes the variables that come out are ones we already knew about from traditional methods.

This is nobody’s fault.
It is like when a clinical trial comes out negative. The science is in knowing that the drug does not work. It is what it is, and it is not a failure.
It is true that we often solve this by adding a layer of genomics or histopathology, but we do not always have that.

But, but, but…

Something even worse can happen.

It may happen that you do bring out new predictor variables, but, since they are not in the literature, nobody believes you.

Neither the journal editor, nor your boss, nor the scientific committee of the conference believe you. Worse still, you don’t believe it yourself…

… until you can prove that the algorithm works by testing it on another uncontaminated group of patients.

That’s called validation.
And until you have validation, it’s all tall tales.
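If you like to see ideas in code, that “uncontaminated group of patients” is just a held-out split. Here is a toy sketch in Python, with entirely synthetic “patients” and an invented biomarker rule (nothing clinical about it), only to show where the evidence comes from:

```python
import random

# Toy example with synthetic "patients" -- invented numbers, not a real model.
random.seed(0)
values = [random.gauss(5, 2) for _ in range(200)]
patients = [(x, x > 5.5) for x in values]  # hidden rule: responders have biomarker > 5.5

# Development cohort vs. an untouched ("uncontaminated") validation cohort.
develop, validate = patients[:150], patients[150:]

# "Training": pick the cutoff that best separates responders in the development cohort.
cutoff = max(
    (t / 10 for t in range(100)),
    key=lambda t: sum((x > t) == y for x, y in develop),
)

# Only performance on the cohort the model never saw counts as evidence.
accuracy = sum((x > cutoff) == y for x, y in validate) / len(validate)
print(round(accuracy, 2))
```

The point is not the arithmetic: it is that the cutoff is chosen on the development cohort, and only the score on the untouched cohort counts as validation.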

That’s why it’s important that I give you a heads-up.

Don’t do a project with AI.
Not unless you’re willing to go all the way.
Not unless you have someone to guide you from model to validation.

If you want guidance, it’s right here.

The patient in the center makes me impatient.


I have a theory.

Most of the phrases that get said in healthcare are useless.

And most of them are repeated without anyone knowing why they are being repeated.

Well, I didn’t want to talk to you about that, I wanted to talk to you about something else…

About what?

Well, it turns out that I’ve heard a million times that you have to put the patient at the center of everything.

I’m not saying that that’s wrong.

In fact, I’m not saying that that’s totally true or not true.

But it’s clear to me that many people say what they say just for the sake of saying it, because they have heard it around.

Because then you see their websites or their emails or their plans and there is no trace of that.

So, my theory is that most people say things because most people say them. Not because they have internalized them.

And I will repeat: I’m not saying that the patient doesn’t have to be in the center or on the roof. I personally have no idea where he should be; I limit myself to seeing where he is, wherever that may be, and studying him. Or better yet, helping others to study him.

Don’t expect Patient Reported Outcomes from me, because I don’t have them. They’re not in the medical records, which is what I exploit with surgical precision.

But rather Doctor Reported Outcomes. The most specific ones, in fact.

That said, in this week’s lesson I’m going to show you how to put your study at the center of it all.

The patient?

No, your study.

At least that’s the way I’ve been doing it for years and I’m not doing too badly, honestly.

I mean, if you want to learn deep stuff and not precooked phrases, look here.

This pack of lies will help me reach more people interested in AI.

I am preparing a new opening for my talks on AI in Medicine.

Openings are important. They are the few seconds or minutes you have to make people think “this one is going to tell something different” instead of “another boring talk like the 5 boring talks I just swallowed”.

I explain this tip for opening talks in the public speaking course I give free of charge to friends and top Savana clients, and everyone who takes it, so they say, comes out speaking better.

This opening tip is called “delocalization” and consists of starting with something unexpected, which you can then relate in some way to what you are going to say.

I’ve used a lot of delocalization over the years, from chessboards to Swiss cows.

The one I’m preparing now is going to be a lightning-fast succession of things we think are true, when they’re not. Things like:

– Napoleon was short
– Bulls like red
– Walt Disney is frozen
– The Great Wall of China can be seen from space.

The idea is not mine.

I’ve ripped it off from Brad Templeton, my AI professor in Silicon Valley in 2014.
An exceptional teacher who made me understand what machine learning was going to shake up in the world.


And who pushed me to set up not one, but two AI companies in healthcare.


Templeton’s classes were great.

Like the jaw-dropping one on autonomous cars.


One of those times when you’re listening to an expert who actually works in the thing, not one who works in something else and read up on it to give the talk. It’s very different.

People ask me why my talks work.

Well, it’s very easy.
Because, like Brad, I’ve been doing it every day for years.
Full time.
And, by the way, I do it alongside many other clinicians with research CVs who came to Savana full time.


I don’t know a damn thing about the algorithms behind the algorithms.

And don’t even talk to me about programming.


But the hard work, here and there, that makes AI projects come out, not stagnate, and give practical results… that, I know a lot about.

I have projects like that on 3 continents…


And if you want me to tell you about it, it’s here.

The word "democratize" is going to kill me.

If you don’t know the Law of the Mirror, now when I tell you about it you’re going to want to buy me a lot of beers.

It has changed many people’s lives. And I am not exaggerating.

Knowing this alone would be worth 10 years of reading me.

But be careful, because your mind will tend to reject it. It’s called “cognitive dissonance”.

It’s normal, but when you let it rest, you’ll see that everything starts to fit.

Ready? Here it goes…

“What bothers you most is precisely what you do or who you are.”

Money bothers you… there’s something you haven’t resolved about money (maybe you would like to have it).
Your neighbor’s music bothers you more than usual… maybe you have missed a few parties in your life.
You really dislike tattoos… you know, maybe you should try a small one on your ankle.

When you don’t have complexes, traumas, or frustrations, things that don’t suit you simply trigger indifference. They don’t bother you.

So why this airport book psychology lesson?

Because one thing is clear.

The smart ones are different from us fools in that they don’t use buzzwords.
Those who really have authority and knowledge do not need to say “FYI” or “ASAP”.

It annoys me very, very much when people say “add value.”

And, surprise surprise, the thing is, I often say these words myself.

If you go to my LinkedIn you will see me in 2014 talking about how AI was going to “democratize” Medicine.

I deserve the worst of punishments.


The Internet has some very bad things, like empty words, but it has something great, which is disintermediation.

Plane tickets without intermediaries.
Buying stocks without intermediaries.
Newspapers without intermediaries.

So there’s a group of people who especially enjoy working with AI-generated data.

They are the ones who have long wanted to get rid of the middlemen.

Because for the first time in their career they no longer have to go to a third party who buys data from the health services and then resells it to them.

No, for the first time they get it directly.

And at Savana we put the technology in place so they can do that.

And if you want to know what it’s like, here.

A journalist once again nails exactly what AI is not for.

This week a journalist asked me the same question again.
“So AI is good for diagnosing patients better?”
Here we go again.
What a craze journalists have for diagnosis.

They must think that our problem is diagnosis.

Journalists of the world, listen to what we have to tell you today.
Our problem is not diagnosis.
Stop asking about diagnosis.
Our problem is treatments.


Sometimes we forget why AI brings a great novelty to medical research.

But it’s not complicated.

It’s very easy.

Listen, I’m going to remind you…

The greatest utility of any clinical data analysis is to find the type of patient who benefits most from an intervention. From a drug, for example.

It’s what you want and what I want. And the patient. And the system. And everyone.

In the past we have been relatively bad at doing this, because even if we tried to come up with a mathematical formula that predicted it, it didn’t contain the variables that actually ended up leading to one outcome or another.

This was so because the variables were picked up by humans.
And we humans have very good things, like humor, compassion, or holidays.

But we have bad things, like cognitive biases and fatigue.

That means we don’t pick up the right variables. Not all of them at least.

We started solving this problem around 2013, when we gained the capacity to accumulate many more variables, because we left the job in the hands of machines.

Machines which also have the ability to analyze them, using methods that go beyond classical statistics.

If we do an AI project it is possible, not certain, but probable, that we will find new predictor variables that will inform us about the clusters of patients that will respond better.

Sometimes we don’t find the variables but we can rearrange the ones we already knew about into new formulas that in any case predict better.
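To see that last idea in miniature, here is a toy Python sketch. The cohort is synthetic and the outcome rule is invented; the only point is that two variables we “already knew about” predict poorly on their own, but rearranged into a combined formula they predict much better:

```python
import random

# Synthetic cohort -- hypothetical variables, invented outcome rule.
random.seed(1)
cohort = []
for _ in range(1000):
    a, b = random.random(), random.random()  # two variables we already knew about
    outcome = (a * b) > 0.25                 # but the outcome follows their combination
    cohort.append((a, b, outcome))

def accuracy(predict):
    """Fraction of patients whose outcome the rule gets right."""
    return sum(predict(a, b) == y for a, b, y in cohort) / len(cohort)

alone = accuracy(lambda a, b: a > 0.5)          # one variable on its own
combined = accuracy(lambda a, b: a * b > 0.25)  # the same variables, rearranged
print(alone, combined)
```

Neither variable is new here. What is new is the formula that combines them, and that is often where the predictive gain lives.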

But that’s enough for today.

If this sounds important to you, we can look at it here.

The question I get asked the most about machine learning.

How could I get trained in machine learning and AI?
That’s the question.
And the answer is a bit annoying.
I was a neurologist in the hospital. It was pretty cool, actually. I liked it. I didn’t cope well with working with people’s lives and deaths without sleep, but I still liked it.
But I was bitten by the bug to “set something up”.
And, lucky me, I did the best thing I could have done: a course.
A program, they call it.
As I had absolutely no idea about technology, finance, marketing or human resources… as I didn’t know what a start-up was…
it was good for me to dive into the deep end for a few months.
I came out refreshed and ready.
But, but, but
It would have been useless if I hadn’t had a specific project in hand.
It would have been just window shopping without a wallet.
And I call that wasting time.
The problem with courses is that they are often an excuse for mental laziness.
A way of kicking the can down the road.
“I’ll do the course and see if I can come up with something.”

But it doesn’t work that way.
First you decide you want to make lasagna and then you look for the recipe.
Because we all know what happens when you open a cookbook to see what to cook.
There are a thousand courses on machine learning, there are some applied to healthcare.
I know which ones are good and which ones you don’t need to know how to program to do them.
But my recommendation is very clear: none.
Do not take any AI course.
Not until you have a project in hand.
And if you want to design one, let me know.

The number 1 responsible for failures with AI is not who you think.

A hair salon can hold a powerful lesson about AI research projects.

Look, the other day I went to Carlos to get a haircut.

I go to him even though he is exactly 7 times more expensive than the one in my miserable neighborhood.

It’s unbecoming of me to make such a splurge, being the textbook cheapskate that I am.

And I don’t do it because he is much better technically.

I do it because we understand each other well.

He keeps quiet and listens.

And I tell him precisely.

It’s a team effort.

Because, and here comes the lesson, I don’t tell him the what. I tell him what for.

I don’t tell him, “I want it to be weathered” or “heal my ends”. I wouldn’t tell him even if I knew what that meant.

I tell him “I have a wedding” or “I’m going to the beach”.

And he acts accordingly. Professional.

Now get this…

I’ve seen every possible way to fail with AI projects.

I have been wrong so many times that learning to refine the method was inevitable, and now I know what it takes to make the whole chain work.

From the research question to the last paper in the last committee.

And I can tell you that number 1, not number 2 or 3, number 1 of the reasons projects go wrong is NOT saying what the hell you want it for.

If what you want is to negotiate a reimbursement price, don’t tell me your aim is to save the humpback whale.

Don’t do that, because then it’s going to go wrong.
But if you tell me what you are looking for or what your boss wants, our probability of success multiplies exponentially.

You talk, I’m listening.

The news about RWE that got 40,000 views yet 3 trolls didn't like it.

I’ll give you the news right away.

But first I must unveil to you one of my least democratic and most politically incorrect techniques: deleting comments on LinkedIn that get in the way of my plan to dominate the world with AI.

And now I can tell you the news and why I delete comments from sad trolls.

A few weeks ago I posted on LinkedIn about how the Spanish Data Protection Agency has accepted a proposal from the “Pharma Industry Association”, by which “virtual” clinical trials can be done without consent.

In other words, patients’ information can be reused without consent.

But beware, only if:

– It is for a research purpose
– The data is anonymized
– It is non-interventional.

The post went very well, although it was a link to another person’s post, without much rigor in the explanation and even with some inaccuracy.

But the topic clearly struck a chord. About 40,000 views, which is a lot.
And many positive comments.

Because people care and are interested in this.

As I said, it’s great news.

Although, of course, there were a few trolls who said things like:

“No consent, how awful”
“And who will be responsible when the world explodes and blows up”
Or things like that.

And of course, I deleted them.

First, because on my Linkedin wall, or whatever it’s called, I can do whatever I want.

But above all, because those comments could make someone passing by who, unlike you and me, doesn’t know the subject, think that “without consent” always means something bad.

As if doing research, as long as anonymity is respected, were not a good thing.

As if, in fact, the unethical thing were NOT to do research.

I tell you this in case someone comes out of the woodwork with this.

So that you can count on me if I can help you defend reason against… well, the unreasonable.

Have a nice day.

If after reading this you don't know how to act, I don't know anymore.

This past week a man and a lady from a large pharma company came to see me at the office. He was nice, she was nice. He was a doctor, she was from marketing.

They took a taxi and came to the office to tell me that, regarding the project I had sold them some time ago, they were “very happy”.

You can imagine the headlines…

… they had presented our work as an oral communication at the European Conference

… the results were also in agreement with other new studies carried out the traditional way

… the prevalence had come out as it should, and that was going to bring them great benefits worldwide

They wanted to make a video telling everything.

They said all that. All that came out of their mouth.

And I, who am quite autistic and quite slow at reading psychology and emotions, was left wondering:

How can it possibly be that a customer goes to his provider’s office to tell him how happy he is?

This never happens, at least not according to the normal rules of life.

Or maybe

Or maybeeeeeee

What happens is that AI opens up possibilities big enough to provoke reactions like this.

Reactions which, by using normal methods, would be impossible to see.

But they are real.

And why do I tell you this story that could be a lie, although it is true?

Well, because you might be able to replicate it.

Because maybe you also need to look at the prevalence of B among patients with A.

Because that’s where your drug stands out.

That’s why I’m telling you about it.

And if you want, we can talk about it here.

Never let them tell you the things they tell me.

Look, I’m having a hard time appreciating the Metaverse, to be honest.

I’m sure it will come and impose itself and give a thousand utilities.

And surely what’s happening is that, without realizing it, I’m getting older and being left behind.

But it has not started very well.

The other day I saw a pharma company showing off on LinkedIn how they had held their first meeting there.

And it was basically them, the people in the meeting, represented by cartoons. 


I wondered whether they were aware of the silly thing they were doing and didn’t care, because “you have to be part of it”, or whether there really are people who cannot distinguish a technology that helps from one that does not.

It happens to me when they invite me to give a talk and they want me to talk about, you know, “a little bit of everything”.

A little bit about social networks, a little bit about telemedicine, a little bit about artificial intelligence.


We need to let everyone know that these are things that are not at the same level.

That teletext is one thing and email is another.

That useless Google glasses are one thing, and sunglasses are another.

That a yogurt maker is one thing and a microwave is another.

That technology is not good just for the sake of it or always.

It has to prove it.

And it has to be solid.

That’s why when you go to tell your boss, your manager or your digital director, you have to explain it well.

Explain that machine learning is mathematics supported by computation, which helps us make predictions about diseases.

And that it sits at the same level of seriousness as classical statistics or MRI.

That this is not for monkeying around and throwing a jug of milk over your head on a social network.

And if you want us to help you tell a serious story, here.

This post is a bit longer than average but it's really worth it.

When Javier Bardem was asked what his favorite movie was, he said “the first few minutes of Up”.

Those first minutes are masterful.

They tell what happens to you as soon as you lose your focus.

What happens with life, I mean.

What it was going to be, and how shitty it has actually been…

That it passes quickly.

It passes and you haven’t lived it because you were postponing it.

For a few years I was teaching innovation in a program for the pharma industry and at the end I played this excerpt from Up.

I wanted to tell them that nothing really mattered.

Business, master’s degree programs, the pharmaceutical industry.

The only thing that matters is living.

When my son gets a little older, I will play Up for him.

And I will tell him: “Son, you go and live”.

Because there will be problems anyway.


Recently a client looked me in the eye and said:

“If I were you, what I would feel is shame.”

The man was angry.

A good man and a good professional.

He was partly right.

And partly not.

He was right that we had not delivered what he wanted.

Because when we did that project, we were young and immature.

We had poorly designed the study population and consequently the predictive model had not identified interesting predictive variables.

But he wasn’t right because you can’t be ashamed of your mistakes when you are doing new things.

And I tell you one thing.

This customer is the kind you want to have.

The ones you don’t want are the ones who don’t pick up the phone.

But he called me to tell me off because he wanted to keep working with me.

Why? To help me?

No, and he didn’t have to.

He did it because he knew we can give him something he can’t get by other means.


When you jump into doing a project in AI, a lot of things are going to go wrong.

At first.

Then it gets better.

But for a while they will tell you things.

They will make you doubt.

Because they will be pissed off that they were not the ones who took the plunge.

Don’t doubt that’s the reason.

But it turns out that doing is the only way to learn.

And one thing is clear: this machine learning stuff, sooner or later, you have to learn it.

It turns out that we have learned so much, between successes and mistakes, that we will make sure that at least you don’t get wrong what most people are still getting wrong.

We’ve got 3 or 4 years on them.

On those who know AI or science, but not both.

And sometimes they know neither.

By the way, months later we delivered the project again, and the one who received it, a colleague of the previous one, literally said: “I’m going to buy you gin and tonics”.

If you want to discover AI and feel proud, not ashamed, you can do it here.

AI is a bit of a superpower, although it's dangerous to say it out loud.

If Superman is so smart, then why does he wear his underpants on the outside?

A silly joke.

But a funny one to me.

And I think the superhero we like the most says a lot about us.

It could also be that you don’t care about superheroes at all.

In fact, I don’t care much about them.

Maybe because I am Spanish.

My partner, who is an actress and succeeded on TV, did it with a show about losers.

She says that this is how it works. That in Spain, movies about losers succeed. While in the United States, it is superhero movies that rock.

Because that’s how low our self-esteem is sometimes in Spain.

A losers’ self-esteem, like Don Quixote’s.

When I was a kid, I always wanted to dress up as Superman.

But they never bought me the costume.

Who knows how much that affected my self-esteem.

Where would I have gotten to with a cape and my underpants on the outside.


I don’t know where you want to go, but chances are you’re not going to get there.

At least not if you don’t keep your kryptonite in a lead box.

Here is the usual kryptonite that prevents people from getting into AI projects:

– “Surely I need informed consent and privacy laws won’t let me.”

Bullshit.

– “The clinical question I have is so obvious that surely someone is already building a similar model.”

Almost never.

– “It’s such a complicated subject that it’s not worth my while to learn it.”

Driving or being a mother is tougher.

– “It’s a bubble; traditional statistics is what works and what is proven.”

I wish you good luck in life.


Spanish as we are at Savana, we were able to invent something before the Americans or the British.

The generation of RWE at multisite level, using Natural Language Processing on medical records, is a Spanish invention.

Like the mop, the submarine or epidural anaesthesia.

It could be a lie.

But it isn’t.
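To make the idea concrete, here is a deliberately tiny, hypothetical Python sketch of turning free-text notes into a structured variable. Real clinical NLP, Savana’s included, goes far beyond a regular expression; the notes and the pattern below are invented, and this is only the shape of the idea:

```python
import re

# Invented free-text clinical notes -- not real patient data.
notes = [
    "Patient reports dyspnea on exertion. EF 35%.",
    "No dyspnea. EF 60%. Follow-up in 6 months.",
]

def extract_ef(note):
    """Pull the ejection fraction (EF) percentage out of a free-text note."""
    match = re.search(r"\bEF\s*(\d+)\s*%", note)
    return int(match.group(1)) if match else None

print([extract_ef(n) for n in notes])  # [35, 60]
```

That is the whole trick, scaled up: the variable was always sitting in the record; NLP is what makes it countable across millions of notes and many sites.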

We can’t make you see through walls (but we can help you see around corners).

We can make you see things in the data that you cannot see with normal statistics.

If you want us to help you start a project, click here.

Enrol in a project