When the media talk about artificial intelligence (AI), they often swing from optimism to post-apocalyptic pessimism. Those supposed “machines that learn anything without human intervention” and that “are about to reach self-consciousness” will bring us incredible achievements on Tuesday morning; on Thursday afternoon a televised panel will debate the positive and negative uses of machine learning; and the Sunday science supplement will reveal how “strong AI” will make humanity obsolete.
And we arrive at dystopia when the conversation turns to the role that artificial intelligence could assume, or is already assuming, in the political and social sphere. In this framing, algorithms are only a source of injustice, manipulation, and surveillance, and nothing and no one can stop the wave of digital fascism and paternalism.
I want to offer a middle ground here and argue that, although the dangers of digitizing political communication, public health, or judicial decisions are real and plausible, so are the opportunities: machine learning algorithms can become tools for social and political justice.
There is a lot of talk lately about the need for an ethics of artificial intelligence. Unfortunately, most of the discussion centers on remote and unreal problems. It would certainly be irresponsible to develop a “strong” artificial intelligence, one similar or even superior to human intelligence, without adding a series of very clear ethical controls. But it is even more irresponsible to fill books and screens with that future singularity while the real, present problems go unexamined.
And I am not referring here to the much-debated question of whom an autonomous car should kill when it must choose between two possible collision trajectories: avoid a frontal collision that would kill the driver, even if it means running over five people waiting for the bus? Run over the old woman or the baby? The literature on this is endless, and a quick search for “ethics” and “autonomous car” will return hundreds of references in a moment.
Most ethical issues are not predictable
The trolley problem applied to autonomous cars is the perfect example of a solution in search of a problem. In a peculiar version of the bed of Procrustes, some engineers and economists want to turn ethical problems into processes of probability assignment based on user feedback about their preferences. Hence their interest in turning any ethical problem into a version of the trolley problem. For better and for worse, ethics is much more complicated.
In the same way that practically no one in 2005, when talking about Web 2.0, imagined that social networks would become instruments of political manipulation, we cannot anticipate most of the ethical problems these technologies will raise. I therefore find it more useful to discuss problems that have already arisen and seek solutions for them, and to wait for new problems to take shape before developing methodologies to deal with them.
To develop these algorithms we collect data from previous decisions, we label which of those decisions were correct, and we trust that the algorithm will capture the relevant regularities and correlations, allowing it to make decisions at least as appropriate as those of the people who made them before, if not better.
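The process just described can be sketched in a few lines of code. The example below is a minimal illustration, not anyone's real system: it fits a tiny logistic-regression model by plain gradient descent on invented records of past decisions, each labeled as correct (1) or incorrect (0), and then scores a new case. All data, feature values, and the `predict` helper are assumptions made up for this sketch.

```python
import math

# Hypothetical records of past decisions: (case features, was the decision correct?)
past_cases = [
    ([0.9, 0.2], 1),
    ([0.8, 0.3], 1),
    ([0.2, 0.9], 0),
    ([0.3, 0.8], 0),
    ([0.7, 0.4], 1),
    ([0.1, 0.7], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit logistic-regression weights by gradient descent on the log-loss,
# so the model captures the regularities in the labeled past decisions.
weights = [0.0, 0.0]
bias = 0.0
lr = 0.5
for _ in range(2000):
    for x, y in past_cases:
        p = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = p - y  # gradient of the log-loss w.r.t. the linear score
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def predict(x):
    """Estimated probability that a 'yes' decision on this case would be correct."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

new_case = [0.85, 0.25]  # a new case resembling the past decisions judged correct
print(round(predict(new_case), 2))
```

The point of the sketch is the shape of the pipeline, not the model: the algorithm never sees a rule for what makes a decision correct, only examples, which is exactly why it can only be as good, or as biased, as the decisions it learned from.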