Author: pjmarco

The unknown reason why AI research fails.

We have no freaking idea, but we imagine that if we open a restaurant and cook well, that's only the beginning.

We also need to manage personnel and suppliers well.

And we suppose that many people will fail because they only think about the first part.

That’s what a child would think.

The second part is what an adult thinks about, and often suffers over when it's too late.

This is what often happens to us in calls and meetings.

When there are people who insist that the key to doing good Real World Evidence with AI is applying very good mathematical models to quality data.

That’s true.

But it’s insufficient.

As people who have no freaking idea about anything and talk about everything, we think a key piece gets forgotten, really, forgotten all the time.

It’s ugly, unpleasant. And no one wants to talk about it until you’re in the shit.

It’s precisely how to get healthcare centers to give access to their data.

And we’re not talking about ethics or law.

We assume that the data is anonymized, approved by the Ethics and Legal Committee.

We’re talking about the other stuff.

The IT guy who has no time.

The politician who dislikes the pharmaceutical industry and stops the project.

The hospital that doesn’t even know where its databases are.

The Clinical Director who says that the Committee has to sign it, the Committee that says that the IT guy should sign it first, and the IT guy who answers that, of course, he’ll sign it as soon as the Clinical Director signs it.

The labyrinth of data.

We work for a company.

It’s called Savana.

And we’re experts, real experts, in navigating that turbulent river.

The thing is, we have 26 people dedicated solely to this task and have worked with:

200 hospitals.
11 countries.
32 different EMR systems.

To conduct research studies with AI using a team and technology that is prepared to ethically reuse health data, here.

Utility of AI that no one thinks about.

We’re going to tell you something.

We’ll tell you and then you can do whatever you want with the information.

You’ll do it anyway, so whatever.

We believe that one of the most amazing things about AI is that it will save a lot of expensive tests.

This is important because when people think of AI, they think of improving diagnosis.

They think of an AI that can detect a polyp. Or they think about risk stratification.

An AI that tells us that these patients here are more likely to decompensate and then we call them.

All of that is great.

But imagine an AI that tells you which patients are more likely to test positive for a genetic test.

A test that is necessary to prescribe medication, for example.

A test that is expensive and not everyone can afford.

Well, we talked to a pharmaceutical company that had this problem.

We spoke to their headquarters in the US, and they said, “Hey, you Machine Learning geniuses. Let’s see if you can really do it.”

We said, “But keep in mind that no algorithm in the world can correctly predict all positive or negative results. It’s about finding more positives for the same amount of money, which is what you and the oncologists want.”

They said ok. That’s fine.

And we don’t know, maybe you want to know if it worked.

If the AI pre-genetic test is worth it. If it’s something that can be applied to all tests in this situation.

But ultimately, what does it matter if we tell you?

If you’re not going to do anything.

If you’re going to keep thinking that AI is “the future.”

If you only like innovation when it involves putting on virtual reality glasses and going on a fake roller coaster.

But not when there are possibilities to change your job with real and cutting-edge technology.

You would love to, but your boss, Legal, or your parents won’t let you.

Anyway, we wish you a great day.

PS. AI pre-molecular tests in 2023, here.

PS 2. In 2024? In the future? No, in 2023. Here.

Recruit patients for trials using AI.

A few days ago, Ignacio H. Medrano had the best interview of his life.

It was conducted by a very young doctor named Mustafa.

Who is British.

You can find it on Ignacio H. Medrano’s LinkedIn profile if you ever want to waste time on the internet instead of doing what you should be doing, like reading a classic, making love, or playing with your children.

Ignacio tried to explain to him that one of the worst things about doctors and technology is that they tend to be quite bipolar.

They are either all in or all out.

Either everything is terrible and going horribly wrong.

Or they love it and want it all, unlimited and forever, with licenses for life, or else they hate you for being an exploitative company.

Ignacio has been in this field for years and still hasn’t learned from engineers, who are capable of improving a little bit each day.

Kaizen, which means “a little better every day” in Japanese.

But the AI tools that are arriving now, and the ones still to come, will be like this.

Gradually getting better, until one day, without even realizing it, Ignacio won’t stop using them.

This happened with GPS. They just don’t remember.

The other day, at one of Savana’s hospitals, doctors were writing their clinical records differently, thinking ahead so that when they use Savana to reuse them, it will work better.

We feel like we’re already getting there.

Closing the circle.

Doctors changing the way they write so that AI can interpret it better, which in turn helps them locate patients.

So if you want to see a demo of our unique tool, Savana Manager v4, which can build you a database with the clinical variables you want, from medical records, all from the comfort of your own home and in convenient monthly payments, it’s here.

PS. From home.

PS 2. Unique in the world.

PS 3. You create the variables you want and find patients.

PS 4. For what? For example, to recruit patients for clinical trials with a very specific leukemia, which you would never find using ICD codes.

But why is it actually better to conduct research with AI?

My father used to print emails, underline them, and file them in blue folders.

He must have been going through a digital transformation.

Because most people transform things into digital to do the same thing they used to do by hand.

But spending more money and having a rough time along the way.

In other words, they believe that going digital means doing the same thing as before, but wired.

That’s because they think of the word digital: diiiiiigiiiiitaaaaaal.

But they don’t think of the other: transformation. Trannnnssssformation. Transformaaaaaation.

Thinking differently. Acting differently. Living differently.

Stopping wasting hours and hours.

Avoiding repetition. Seeing beyond.

You can’t imagine how many people you introduce to a technology that can read thousands of variables (all the variables) of a patient.

And they ask you about survival at 3 months. And if they have high blood pressure.

OK. They haven’t understood.

That having thousands of variables allows you to group phenotypes in new ways.

Cluster clinical responses based on reality.

Predict who will respond to A and B. Be precise.

How is it done?

I’ll explain it to you in 30 minutes.

If you write here.

An Ethics Committee member encourages research without consent.

The day I realized I had ADHD, I started crying.

I cried and cried like a child and it took me a while to stop.

It was the relief of tears.

It was my first day of rotation in the Neuropediatrics clinic.

The pediatrician started asking the parents questions to see if that child had it.

And I realized that I answered positively to all the questions.

It turns out that in all my years of medical school I had never noticed it or studied it or anything.

So that day, all of a sudden, I understood why I had lost countless keys, wallets, flights, and girlfriends in my life.

I understood why never in my damn life had I been able to listen to an entire lecture.

And why I had to study walking around the room like one of those crazy zoo tigers, making a superhuman effort.

But look.

We ADHD people have some good things.

Besides being funny and creative out of necessity, we are great at detecting a good communicator.

If someone catches my attention, they must be amazing.

Like the other day Federico de Montalvo.

A law professor who left me stunned, gobsmacked, and stupefied with his overpowering Jesuit rhetoric.

His clarity of ideas.

His lucidity.

And a thesis that was very clear and impeccably defended:

Investigating without informed consent, whenever the data is pseudonymized (it doesn’t even have to be anonymized), is not only ethical; not doing so is a breach of our duty as a society.

I had an intellectual orgasm with Federico.

He told me later that he had heard a lot about me, so I’m going to propose that we do things together.

Write down his name because there are few like him.

Although maybe you already knew of him.

Apparently, he appeared everywhere during COVID because he is a member of the Ethics Committee at UNESCO.

And there I was, without a damn idea of who he was.

That’s because ADHD people don’t watch TV.

We can’t.

By the way.

To do “big data” studies with millions of pseudonymized clinical records without individual consent, but for the purpose of increasing scientific knowledge, it’s here.

Complete the info, and a KAM will contact you ASAP:
