5/15/17 · Research

Algorithms, a threat to the fight against stereotypes and social discrimination

Experts call for transparency and accountability in the design of algorithms
Photo: Unsplash/Carl Heyerdahl

“Today algorithms have reached such a point of complexity that they can decide who has the right to a mortgage, what medical treatment should be prescribed or how much tax must be paid”, explains Jordi Cabot, software engineer and ICREA researcher at the UOC. Facebook, Amazon, Netflix and Spotify are the best-known examples, but companies in other sectors already work with these mathematical formulas. A study by Deloitte Global predicts that, by 2020, 95 of the 100 biggest software companies by turnover will have integrated similar technologies into their products. However, as this researcher warns, algorithms “can create injustices or social discrimination”.

A study by the Pew Research Center explains that algorithms are instructions that apply artificial intelligence techniques to optimize decision-making processes. They can save lives, make things easier and conquer chaos but, as pointed out by several international experts at MIT, for example, they can also leave too much control in the hands of corporations and governments, perpetuate biases, create filter bubbles, reduce creativity and even lead to greater unemployment.

“There are cases of algorithms that have turned racist, such as Amazon's, which avoided deliveries to neighbourhoods in the United States with large African American communities, or Microsoft's chatbot, which had to be withdrawn immediately because of its racist comments”, notes Cabot, also a researcher in the UOC research group SOM Research Lab. Even though in most cases the bias was the product of partial or incorrect data, “the problem is serious”. Automating decisions that affect individual people requires “transparent algorithms” that can be assessed objectively.

As Javier Borge, researcher in the UOC Complex Systems research group (CoSIN3), also asserts, “algorithms can discriminate not only by race but also by gender, among other things”. Because of historical injustices, for example, these artificial intelligence systems can learn that most CEOs are white men. “So that when a black woman applies through an employment service that uses an algorithm to propose the best candidates, she may have a lower chance of being selected”.
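To make Borge's point concrete, here is a minimal, hypothetical sketch, not any real employment service's system: a classifier trained on historically skewed hiring decisions learns to treat a demographic attribute as a predictor and then scores two equally qualified candidates differently. All data and numbers are invented for illustration.

```python
# Minimal sketch: a model trained on biased historical hiring data
# reproduces that bias for new candidates (illustrative data only).
from sklearn.linear_model import LogisticRegression

# Toy historical records: [years_of_experience, belongs_to_majority_group].
# The group flag should be irrelevant, but past decisions correlate with it.
X_train = [
    [10, 1], [8, 1], [12, 1], [9, 1],   # majority-group candidates, all hired
    [10, 0], [11, 0], [9, 0], [12, 0],  # equally experienced others, mostly not
]
y_train = [1, 1, 1, 1, 0, 1, 0, 0]      # 1 = hired, 0 = rejected

model = LogisticRegression().fit(X_train, y_train)

# Two candidates with identical experience receive different "hire" scores:
print(model.predict_proba([[10, 1]])[0][1])  # higher probability
print(model.predict_proba([[10, 0]])[0][1])  # lower, despite equal merit
```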

“As they are programmed by people, algorithms always have a strong ideological component”, points out UOC Philosophy professor Enric Puig. The very fact of thinking that the world can be ruled by algorithms already implies an “ideological posture”. The real hidden danger, therefore, is “to remove human beings from the position of self-government and take away their freedom”. For this expert, director of Institut Internet.org, the solution will only be found by understanding that algorithms are a way of seeing the world at the service of ideologies that can be reinvented, and that utopian thought, which lies completely outside the logic of the algorithm, is a legitimate and desirable posture.

For Miquel Seguró, professor of Philosophy at the UOC and researcher at the Ramon Llull University (URL) Ethos Chair, “the danger of manipulation is obvious”. The first impression, he explains, is that algorithms are useful for predicting and reducing the degree of uncertainty in a computer calculation: “they combine induction (for example, if a person has bought several horror novels, they will be interested in another in the same genre) and deduction (if someone has bought a horror novel, they will also be interested in a crime novel or thriller)”. Algorithms therefore have the capacity to “modulate the uncertainty of a desire that the user has not yet been able to formulate”. There is also a reduction of diversity: “if you can manipulate desire, you can also channel it and reduce it to the variables that interest whoever controls the algorithm”.
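A toy sketch can make the two inferences explicit. This illustrates the reasoning Seguró describes, not any real recommender system; the rule base and catalogue below are invented:

```python
# Toy recommender: induction suggests more of a purchased genre;
# deduction applies a fixed rule linking related genres.
RELATED_GENRES = {"horror": ["crime", "thriller"]}  # hypothetical rule base

def recommend(purchases: list[str], catalog: dict[str, list[str]]) -> list[str]:
    suggestions = []
    for genre in set(purchases):
        # Induction: generalize from past purchases within the same genre.
        suggestions += catalog.get(genre, [])
        # Deduction: add titles from genres a rule declares adjacent.
        for related in RELATED_GENRES.get(genre, []):
            suggestions += catalog.get(related, [])
    return suggestions

catalog = {"horror": ["It"], "crime": ["Gone Girl"], "thriller": ["Misery"]}
print(recommend(["horror", "horror"], catalog))  # ['It', 'Gone Girl', 'Misery']
```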


The “algorithmic economy”: the post-app era

“The power of current algorithms and the use of big data to make predictions are allowing many new services to be created and automated. This means that the economy is constantly moving”, explains Cabot. According to Gartner, by 2020 intelligent agents, that is, complex algorithms capable of self-learning and of communicating with human beings in natural language, will handle 40% of mobile transactions, and the post-app era will begin to dominate. By then, “consumers will have forgotten about the apps and instead will trust intelligent agents or personal assistants in the cloud, such as Cortana, Google Now, Siri, and Tiro”, which he classifies as the first of these algorithms.

Indeed, as Borge explains, many insurance companies in the US today install devices in cars that record all driving activity (acceleration, braking, speed, hours at the wheel, etc.). With the appropriate algorithm, all these variables “will determine the cost of every individual's insurance”. In the banking sector, credit cards reveal what you bought, when, where, from whom and for how much, and all of this “helps them develop personalized advertising strategies (for example, BBVA has a data lab just for this; so does Telefónica)”.
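The pricing logic Borge alludes to can be sketched in a few lines. The variables come from the article; the weights, base premium and function name are invented for illustration, since real actuarial models are far more complex and proprietary:

```python
# Hypothetical usage-based insurance pricing: telematics variables
# are scored and used to scale a base premium (all weights invented).
def annual_premium(hard_brakes_per_100km: float,
                   mean_speed_over_limit_kmh: float,
                   night_hours_share: float,
                   base: float = 600.0) -> float:
    """Combine driving-behaviour variables into a premium multiplier."""
    risk = (0.04 * hard_brakes_per_100km
            + 0.03 * mean_speed_over_limit_kmh
            + 0.50 * night_hours_share)   # night driving weighted most heavily
    return round(base * (1.0 + risk), 2)

print(annual_premium(2.0, 5.0, 0.10))   # cautious driver -> 768.0
print(annual_premium(8.0, 15.0, 0.40))  # riskier profile -> 1182.0
```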

Moreover, in the retail sector it is possible to learn about eating habits, shopping frequency and amounts spent through, for example, supermarket loyalty cards. “With the recent regulation in the US, owners of browsers (Explorer, Chrome, Firefox, Safari, etc.) are allowed to sell the browsing history of their users, and this clearly provides opportunities for personalized advertising”, claims the researcher. Public transport travel cards, meanwhile, allow the number of entries, exits and interchanges (for example, from metro to bus or the reverse) to be monitored. “With this the authorities can better plan urban mobility, anticipating the more ‘dangerous’ zones and times (accidents, attacks), etc.”, he adds.


Principles for algorithmic transparency and accountability

Given the power of algorithms to determine the decisions of society, and to raise awareness of how serious any manipulation of data, voluntary or otherwise, could be, the Association for Computing Machinery (ACM), one of the biggest international computing associations and one of which Cabot is a member, has proposed seven principles for algorithmic transparency and accountability. For the UOC researcher, they are an important tool for reflecting on how everyone can try to apply them in their daily work:

  1. Awareness. Owners, designers, builders and users of algorithms should be aware of the possible consequences of their use and the possible biases they can cause.
  2. Access and redress. Regulators should encourage mechanisms that enable individuals harmed by algorithmic bias to question the decisions and obtain redress.
  3. Accountability. Institutions are responsible for the decisions made by their algorithms, even if they cannot explain in detail how those decisions were made.
  4. Explanation. Especially when they affect public policies, the institutions that use algorithms to make decisions must explain the processes followed by the algorithms and the decisions they make.
  5. Data provenance. Algorithm builders should describe the characteristics of the training data (where and how it was obtained, etc.) and explore the possible biases the algorithm may inherit from that data. To protect privacy or commercial secrets, access to the training data may be restricted to explicitly authorized people.
  6. Auditability. Models, algorithms, data and decisions made must be recorded in case an audit is needed in the future.
  7. Validation and testing. Institutions must use rigorous methods to validate their models and document those methods. They must also run specific tests to detect possible biases in the algorithm, and the results of these tests should be made public; one form such a test could take is sketched after this list.
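As an illustration of principle 7, here is a minimal sketch of one common bias test, the “four-fifths” disparate-impact check used in US employment contexts. The data, group labels and threshold below are invented for illustration and are not part of the ACM text:

```python
# Disparate-impact check: compare selection rates across two groups
# and flag ratios below the conventional 4/5 (0.8) threshold.
def selection_rate(decisions: list[int]) -> float:
    return sum(decisions) / len(decisions)

def impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# 1 = selected by the model, 0 = rejected, split by demographic group.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # selection rate 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # selection rate 0.375

ratio = impact_ratio(group_a, group_b)
print(f"impact ratio: {ratio:.2f}")  # 0.50
print("potential bias" if ratio < 0.8 else "within 4/5 rule")
```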

 
