Hey Siri, Do I Have Cancer?

By Charlotte Simard-Zakaïb, as part of the Summer School of the Cyberjustice Laboratory in 2019.


One in two Canadians will be diagnosed with cancer during their lifetime, and one in four Canadians will die from it. Early detection makes cancer easier to treat and cure, but for most types of cancer there is no screening test. Moreover, if doctors could predict a patient's life expectancy or prognosis before treatment, different strategies could be chosen or prioritized. Researchers are still trying to determine whether medical imaging tests already used to diagnose and plan treatments could lead to earlier diagnosis of the disease. Artificial intelligence (hereinafter "AI") may well be the solution.

AI is growing rapidly, moving from its experimental phase to the implementation phase in several fields, including medicine. In recent years, phenomenal advances in learning algorithms, deep learning techniques and image recognition have enabled giant strides in AI. Radiology in particular, being a specialty largely built around the complex processing and interpretation of medical images, is a prime candidate for the adoption of these techniques. From this emerged the idea of radiomics: the exhaustive analysis of large numbers of medical images in order to extract the characteristics of different cancers. Radiomics offers an innovative, fast and non-invasive (no biopsy required) approach to the challenges of precision medicine. Precision medicine is a treatment strategy that gives radiologists access to more information, in a shorter time frame, so that the targeted molecular agent to be administered can be chosen based on the genetic mutations related to the type of cancer, rather than on the organs affected.
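To make the idea of radiomic feature extraction concrete, here is a minimal, purely illustrative sketch. Real radiomics pipelines compute hundreds of standardized features (shape, texture, wavelet-based); this hypothetical example only computes a few first-order intensity statistics from a region of interest, using nothing beyond the Python standard library. The function name and feature set are assumptions for illustration, not part of any actual radiomics toolkit.

```python
import math
import statistics

def extract_features(roi):
    """Compute simple first-order 'radiomic' features from a region
    of interest, given as a flat list of pixel intensity values."""
    # Build a 16-bin intensity histogram for the entropy estimate.
    lo, hi = min(roi), max(roi)
    width = (hi - lo) / 16 or 1.0  # avoid division by zero on flat images
    hist = [0] * 16
    for v in roi:
        hist[min(int((v - lo) / width), 15)] += 1
    probs = [count / len(roi) for count in hist if count]
    return {
        "mean": statistics.fmean(roi),          # average intensity
        "std": statistics.pstdev(roi),          # intensity spread
        "entropy": -sum(p * math.log2(p) for p in probs),  # histogram entropy
    }

# Toy 'scan': mostly low intensities with a brighter lesion-like patch.
scan = [100.0] * 900 + [150.0] * 100
features = extract_features(scan)
print(features)
```

In a real system, features like these would be extracted from thousands of annotated scans and fed to a learning algorithm that correlates them with tumour genotype or treatment outcome; this is where the data-sharing and bias concerns discussed below arise.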

However, in order to be able to develop these technologies and ensure growth in AI research in the medical sector, it is necessary to address the challenges of computer data modeling, as well as the issue of liability, both in the medical and legal spheres.

Data ownership and ethics issues are well-known challenges, particularly in the medical context. Radiomics and precision medicine are obviously driven by the integrated analysis of multiple types of data and their extensive collection. Historically, the data sets available in radiation oncology have been more limited than those available in other specialties, creating a barrier to AI development. Growth in the field depends on massive data sharing to build a more complete pool, which raises concerns about the confidentiality of this information in a highly interconnected domain.

A further challenge surrounding data collection is the fear that algorithms may reflect human bias in decision-making. Indeed, a program was recently introduced in the legal sector to assist judges' decisions on the conditional release of accused persons. This "predictive justice" has demonstrated a disturbing propensity for discrimination. Similarly, an algorithm designed to predict genetic findings related to cancer may be biased if certain populations are underrepresented in the underlying genetic studies. AI can also be misused; just think of the Volkswagen scandal and its nitrogen oxide emissions. By analogy, we must ensure that the developers of these algorithms do not give in to the same malicious temptations and program systems that steer clinical diagnostics toward generating profits (such as prescribing certain drugs) without resulting in better care. This highlights the urgency of adopting legislation and regulations around AI, particularly in complex areas of care that require a high degree of precision in diagnosing the disease and choosing the most appropriate treatment.

In addition, as AI evolves and surpasses human skills at some tasks, the issue of responsibility will become inevitable. Would it be medically justified to choose a treatment plan that is not supported by an AI-based algorithm, particularly when the number of constraints to be taken into account could exceed what humans are able to assess? The day AI can make autonomous decisions about diagnosis and treatment, going beyond its role as a support tool, the question of whether its developer should be held responsible for those decisions will become crucial. The question then is: whose fault is it? AI will necessarily play an increasingly important role in the years to come in the relationship between doctors and patients, which must remain bound by the fundamental ethical principles that have guided clinicians through all these years.

The spectacular advance of AI in radiology is probably one of the largest and most beneficial in the world. The possibilities are immense; it is already estimated that an average time saving of 80% can be achieved in planning appropriate treatments for multiple diseases. However, technological advances are often accompanied by a loss of some human clinical or technical skills and a gain in others. Even though AI attempts to understand and model the capacities that characterize intelligent behaviour by mimicking the thinking of a highly qualified and specialized physician, the fact remains that algorithms are not, and never will be, human.


Translated by Bianca Lalanne Rousselet 

This content was last updated on 06/09/2020 at 13:23.