By Ana Coradazzi
Originally published in February 2023 on Slow Medicine Brasil
ChatGPT has had a tsunami effect on the digital world since it was launched to the public in November 2022. Based on the most advanced Artificial Intelligence (AI) technology known to date, it can provide information about any topic in seconds, extracting it from everything we humans have ever written online. Yes, it can create a love poem, a grocery list, a doctoral thesis, a scientific article, song lyrics, and a work of fiction. More than that: it is a highly useful tool for summarizing complex topics in a way that you, I, and even our grandmothers could understand. And it does all this with impressive accuracy, which has left the world awestruck. You only have to ask. As expected, its potential usefulness in the health sector seems infinite.
It is nothing new that many tasks performed by doctors could be done more quickly and efficiently by computers. We carry a huge load of activities related to medical records and filling out documents, for example, which consume a lot of time. ChatGPT could revolutionize the writing of hospital discharge summaries, in which important information is recorded so that the healthcare team at Basic Health Units or clinics can continue caring for patients. Today these summaries are a major headache: a good discharge summary requires time, a capacity for synthesis, and a thorough understanding of the patient's health context. All the important information needs to be there, but excess information can confuse the healthcare team that receives it. The challenge is even greater if we are honest: those who write discharge summaries are generally the least qualified or least experienced members of the team, or someone on duty who did not even participate in the patient's care, which turns the discharge summary into a "copy and paste" exercise that makes no sense at all. A tool like ChatGPT could solve this problem in seconds, identifying the most significant clinical data, in chronological order, with the corresponding interventions and test results. That is a dream for any doctor who receives that patient weeks later. Discharge summaries are just one random example, but there are countless situations where the breadth, speed, and efficiency of AI could be game-changing in healthcare. Its capacity for synthesis could make the lives of health professionals so much easier in their insane quest to stay up to date with scientific publications. Medical prescriptions could be optimized. More efficient health management strategies could be developed. The possibilities are unimaginable, even more so looking from here, from the beginning of it all.
Nevertheless, although its advantages may seem infinite, it is worth listening to the voice of common sense: even technological miracles have their limits, and we humans tend to ignore them in the name of convenience and enthusiasm.
You don't need to be a technology genius to see two of AI's vulnerabilities, both well known to all of us. The first concerns the "ingredients" that ChatGPT needs to work its magic: data. If the data we feed the AI is incorrect, incomplete, or even (why not?) manipulated, the results can be useless, harmful, or even catastrophic. That is not an exaggeration. An incorrect answer to a trivial question like "What do I need to make a perfect orange cake?" can, at most, produce a bad cake, while a discharge summary based on incorrect or inaccurate data can compromise someone's medical follow-up and health prognosis. But it could be worse: maliciously manipulated data can drive the consumption of unnecessary services or devices, placing professionals' work at the service of the interests of the system they work within, interests that will not necessarily be aligned with the needs of patients. The reliability and security of these data could become a problem of frightening dimensions. In other words: we will continue to depend on reliable human hands and minds to provide the data for AI, and this is no small thing (in fact, our responsibility becomes even greater).
The second point concerns a philosophical question: what will we do with the time AI saves us? Optimists will say we can invest it in closer interactions with our patients, more personalized care, and more empathetic assistance. But if we are honest and look at what we have done whenever time has been freed up, we have to admit that our "free" time is rarely used for the benefit of patients. We increase the number of patients seen in a day (instead of expanding the time we spend with each one of them), we request more exams (since we have the time and availability, and they are just a click away), and we turn our attention and energy to meeting the demands of the technology itself (doctors and nurses, for example, spend more hours recording data on the computer than being with their patients). Will we start using our time to deepen our human relationships with the people we care for? Something tells me that we won't. It also seems unlikely to me that we will take the time needed to check what the AI has produced. And yes, this is essential. We need to be very clear about our responsibility: an AI-compiled discharge summary, for example, needs to be reviewed by the doctor before it is sent. In practice, this sense of responsibility has been in short supply, and haste (or laziness) often makes us hit "enter" without even having read what is written. Not to mention the time saved by obtaining, in a second, a summary of the latest scientific data on a given disease, with considerations and analyses, without having to read entire articles ourselves (this may save us time, but it leaves us at the mercy of critical evaluation by who knows whom, with biases that are impossible to identify). Did you feel a chill down your spine? Me too. As always, the problem is not the technology but what we do with it.
Nevertheless, even if we correct these "deviations" and make AI safer in healthcare, there is a point that goes beyond technical issues, one we have perhaps been neglecting for longer than we should: the human needs of the health professional. I thought of them when I read that ChatGPT can write poetry and song lyrics so efficiently that you could not tell them apart from those written by poets and composers. Perhaps reading a poem written by AI won't make much difference to the reader, who will experience intimate emotions triggered by words carefully arranged based on everything humans have produced in history. But what about the poet? A poet does not write poems only for others. He writes them as a way of expressing his soul, of exposing his intimacy, and this is not only therapeutic but vital to his existence: without writing them down, he loses his essence. I wonder whether we health professionals will feel the same existential void. If we allow a technology like ChatGPT (or any other) to become the new model of healthcare, to the point of dismissing our insights, our personal experiences, our consideration for the people who seek us out, perhaps the meaning of what we do will be reduced to almost nothing. Perhaps we will have fewer and fewer doctors, nurses, psychologists, and other professionals actually interested in caring for human beings. The profile of health professionals could change: instead of people who identify with alleviating suffering and promoting health, we will have people who are passionate about computer systems and flowcharts. Are our patients ready (and willing) for something like that? Will they be willing to give up the uncertainties and inconsistencies of relationships with their doctors in the name of the (almost) infallibility of machines? Or will they feel safer being able to count on a human being, even a fallible one, standing by their side?
Our enormous challenge will be to rethink what we expect for our health: do we want care whose foundation is restricted to data, algorithms, and documents, or will we understand it as another indispensable kind of human interaction? Can the first option coexist with the second? I honestly hope so, but I must confess that the recent history of Medicine has led me to believe that the first almost always prevails.
In the end, time will tell which paths we follow. It will be us, patients, professionals, and all other human beings, who make the choices, sensible or not, that will shape the health of the future. We will be responsible for defining what is useful to us, what is acceptable, and what is intolerable. Will we, this time, use our Natural Intelligence to put technology at our service, or will we once again surrender to the convenience of Artificial Intelligence to the point of neglecting what really makes us happy as human beings?
This article was translated from Portuguese by André Islabão.