Artificial intelligence represents an opportunity for social progress comparable to the industrial revolution or the advent of the first computers. However, it could also mean the end of many of the freedoms that society has enjoyed so far.
This technology has given us wonderful applications, such as the early detection of cancer, personalized education, and the optimization of energy consumption. But it has also produced frightening ones, such as the generation of fake videos of real people, the identification of a person from a single photo, and the creation of cybernetic soldiers. How do you control the advancement of a technology with the potential for both good and harm?
The movie Minority Report already presented us with a futuristic world in which crimes could be predicted before they were committed in order to prevent them. That universe may not be far off, given current applications of artificial intelligence that predict the probability of crime and identify people through video cameras.
The film posed the ethical conflict of condemning someone for something they hadn’t done yet: what if the prediction is wrong?
Questions about ethical and scientific limits, historically associated with the health and social sciences, have also been transferred to artificial intelligence. Traditionally, ethics committees have been responsible for approving the research that may be carried out.
While the boundaries in other contexts are more clearly defined, the rapid advancement of artificial intelligence is blurring them. Where is the line that separates an application of artificial intelligence that is positive for society from one that affects the freedom and privacy of citizens?
Setting the boundaries
This issue sits at the intersection of multiple disciplines, such as philosophy, privacy, ethics, and security. For this reason, multidisciplinary committees of experts at the European level should be the ones to lay the foundations of the kind of society we want to become in a few decades.
These decisions can be conveyed to researchers through the types of projects funded in competitive calls, and regulated through the ethics committees of each research institution.
For their part, states must apply them in their national programs, an issue that may prove difficult when they have to make sensitive decisions in areas such as public security or defense.
Introducing programs to evaluate artificial intelligence initiatives deployed at scale, and placing society at the center of these applications, will be two fundamental keys to ensuring that they do not generate rejection or stray from the right path.
The line drawn should not be immovable; it should be reviewed periodically as artificial intelligence research and applications evolve and their impact on society can be analyzed.
The role of citizens
These guidelines may shape the kinds of proposals put forward and implemented in public and private institutions, but should we also control what is already within the reach of the average citizen?
The democratization of technology is giving citizens access, in domestic settings, to tools that were previously available only to scientists.
The Netflix documentary Unnatural Selection reflects this trend by showing how some people purchase kits to perform gene editing at home with CRISPR. It thus raises the problem of setting ethical limits on another technology with enormous potential for social benefit, but one that can also be used to play god.