
[ARTICLE] Building a Future with AI in Europe (2 / 2)

The European Declaration, drawn up by the European Commission in June 2018, aims to promote trustworthy AI for Europe, grounded in three founding values: fundamental rights, democracy and the rule of law. Trustworthy AI has three characteristics that should ideally work together throughout the system's life cycle:

  1. It must be lawful, ensuring compliance with applicable laws and regulations.
  2. It must be ethical, ensuring adherence to ethical standards and values, including fundamental rights understood as special moral rights, ethical principles and related core values.
  3. It must be robust, both technically (adapted to a given context, such as the scope of application or the life cycle phase) and socially (taking due account of the context and environment in which the system operates), because even with good intentions, AI systems can cause unintended harm.

Should tensions arise between these three characteristics, society will have to work to resolve them.


This declaration aims to develop systems that respect human autonomy, prevent harm, and remain fair and explainable. It is thus based on principles similar to those of the Montreal Declaration, paying particular attention to situations of inequality. It also encourages training, education, innovation, reflection and discussion on ethics and Artificial Intelligence systems.

It also highlights the benefits and risks of artificial intelligence. To that end, the declaration sets out seven requirements that these systems must meet in order to qualify as trustworthy AI.

The requirements set out below have led the authors of the Declaration to reflect on questions such as the following:

1. Human agency and human oversight

(Fundamental rights)
Does the AI system strengthen or increase human capacity?

2. Technical robustness and safety

(resilience to attack and security, fall-back plans and general safety, accuracy, reliability and reproducibility)
Have you considered the potential impact or safety risk to the environment or animals?

3. Privacy and data governance

(respect for data quality and integrity, and access to data)
Have you taken steps to strengthen privacy?

4. Transparency

(traceability, explainability and communication)
Have you assessed the extent to which the system’s decision influences the organization’s decision-making processes?

5. Diversity, non-discrimination and fairness

(absence of unfair bias, accessibility and universal design, and stakeholder participation)
Have you thought about the diversity and representativeness of users in the data?

6. Societal and environmental well-being

(sustainability and respect for the environment, social impact, impact on society and democracy)
Do you have measures in place to reduce the environmental impact of the life cycle of your AI system?

7. Accountability

(minimizing and reporting negative impacts, trade-offs and redress)
Do you have training and education frameworks in place to define accountability practices?

This document encourages Europe to position itself as the home of, and world leader in, ethical, cutting-edge technology. Trustworthy AI will enable respect for human rights, democracy and the rule of law. Artificial intelligence can provide sustainable solutions for quality of life, human autonomy and freedom. It can also help improve health and reduce inequalities. Nevertheless, it still raises concerns about people’s well-being, decision-making and security.


AI systems must respect, protect and preserve the physical and mental integrity of individuals, allowing them to maintain their personal and cultural identity and to retain absolute control over their choices and freedoms. Artificial intelligence must therefore preserve the plurality of individuals and respect their personal data. Indeed, through the digitisation of human behaviour, AI can infer a person’s preferences, age, social class, sexual orientation, religion and political opinions.

The principles of respect for human autonomy, prevention of harm, fairness and explainability should be adhered to. AI systems must not subordinate, coerce, deceive, manipulate, condition or govern human beings. On the contrary, they must enhance, complement and promote social, cognitive and cultural competencies. Processes must be transparent and must not harm individuals.

AI systems should not influence individuals, but rather help them make more informed choices in relation to their own goals. Nor must they harm people’s social connections.

A future “trustworthy AI” label would certify, by reference to precise technical standards, that a system is compliant in terms of safety, technical robustness and explainability. The declaration also strongly encourages innovation, training and communication, so that both the positive and negative aspects of these systems are widely known and can thus help guide the development of society.

Finally, trustworthy AI has great potential to help alleviate the problems facing our society, such as an ageing population, growing inequalities and pollution. Such systems could make a lasting contribution to reducing humanity’s impact on the environment and enable more efficient energy use and consumption. AI can therefore help shrink our environmental footprint and move us towards a greener society.

To conclude, these two declarations set out essential principles, which nevertheless need to be the subject of proactive, more in-depth reflection applied to present-day situations and to the developments our society envisages.


To find the first part of the article, please refer to this link.
