In a world increasingly dominated by Artificial Intelligence (AI), it is sometimes important to reflect on the value of traditional ways of working and of creating knowledge.

Tools like ChatGPT, arguably the best known and most widely used today, have gained traction because, put simply, we can now process more data, and process it faster, than ever before. However, if you are familiar with statistical analysis, you will know that the quality of the data set directly impacts the quality of the results. And we’ve all seen the quality of the data available online these days… The days of rigorous fact-checking, editorial review, and careful proofreading are long gone. It’s now common to find articles, even in once-respected publications, that contain unverified claims, lack proper sources, or are riddled with grammatical errors. This shift reflects a broader trend: quantity has overtaken quality when it comes to publishing information.
The problem is that many people today blindly trust tools like ChatGPT and fail to question the veracity of the information they provide. Day in and day out, we read articles that go unchallenged and, over time, are treated as facts. I am tired of seeing false claims spread through AI-generated deepfake videos, or content designed solely to drive engagement with an increasingly uncritical audience. Misinformation spreads, gets embedded in popular culture, and eventually becomes the accepted truth. Exaggeration sells, while facts, science, and truth seem to matter less and less.
At the same time, history has shown us that breakthroughs are often the result of serendipity, alternative thinking, and human curiosity. We like to say, “one size doesn’t fit all,” yet we often expect AI to do exactly that: apply the same algorithms to every situation.
Take, for instance, one of the most important medical discoveries in history: penicillin. Nearly a century ago, the Scottish physician and microbiologist Alexander Fleming was studying staphylococci when he decided to take a well-deserved break from his work. When he returned, he noticed that a blue-green mold had grown in one of his Petri dishes, which had accidentally been left open. The mold had killed off the surrounding bacteria. This accidental discovery led to the isolation and development of penicillin, revolutionizing the treatment of bacterial infections worldwide.
Or consider Wilhelm Roentgen, the German physicist who discovered X-rays in the late 19th century. While experimenting with a cathode ray tube, he noticed that a nearby fluorescent screen would glow even when the tube was covered. Curious, he placed various objects in front of the tube, but the effect remained. Finally, he put his hand in front of it… and saw an image of his bones projected onto the screen. This led to the first-ever X-ray image, transforming medicine forever.
We often justify the use of AI and automation by saying that they will reduce human error. However, without human error and the curiosity to investigate the unexpected, these and many other discoveries might never have happened.
In an industry that aims to alleviate patient suffering, we need to keep our feet firmly on the ground and be able to discern when AI can help analyze data and deliver useful outcomes, and when it is merely a toy, one that can be dangerous in the wrong hands.
I am a great believer in automation, and in using the tools at our disposal. But I also believe in critical and analytical thinking. Without it, science will stall. We can create fantastic algorithms to learn more about drugs, how they interact with proteins in the body, and how they can heal. But those algorithms can only be refined with robust training data sets, which means we must continue to generate large amounts of well-validated experimental data.
In the fast-paced world of pharmaceuticals, traditional experiments and analytical thinking still hold immense value, probably more than ever before.