Requirements of Trustworthy AI

Today, Artificial Intelligence is seen by many as a technology that can transform society. AI systems can be used to improve human welfare and freedom and thus serve the common good. However, AI systems can also give rise to risks concerning trust, racial profiling, robustness and unintentional harm to humans. The European Commission has prepared an AI strategy with guidelines for reaping the benefits of AI systems. The core values of any AI system should be respect for human rights, democracy and the rule of law.

To establish a common understanding of what Artificial Intelligence is, the following definition is proposed in the European Commission’s communication on AI.

“Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.  AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).”

Artificial Intelligence can be seen as computer intelligence, and it has many subfields. One of these subfields is machine learning, which includes techniques such as neural networks, deep learning, and decision trees. These techniques allow an AI system to learn from data how to solve problems for itself.
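As an illustration (not part of the Commission’s text), here is a minimal sketch of what “learning from data” means in practice: a decision tree is trained on a small, made-up labelled data set and then applied to an unseen case. The data, labels and parameters are all hypothetical.

```python
# Minimal sketch (toy data, assumed loan-approval scenario): a decision tree
# learns rules from labelled examples and applies them to a new case.
from sklearn.tree import DecisionTreeClassifier

# Each row is [age, annual_income_keur]; label 1 = application approved.
X = [[25, 30], [40, 80], [35, 60], [22, 20], [50, 90], [30, 25]]
y = [0, 1, 1, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)                    # the system learns decision rules from the data
print(model.predict([[28, 55]]))   # and applies them to an unseen applicant
```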

7 key requirements that AI should meet in order to be trustworthy

Based on fundamental rights and ethical principles, a guideline with seven key requirements has been prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG).
These requirements are aimed at different stakeholders in the AI system’s life cycle: developers, deployers, end-users, and society. The requirements cover systemic, individual and societal aspects:

1. Human agency and oversight

AI systems should support fundamental human rights, and a fundamental rights impact assessment should be undertaken when designing them. AI systems should also be designed to respect user autonomy and not make decisions based solely on automated processing. The systems should have governance mechanisms that allow human intervention in the decision cycle of the system.
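To make the oversight idea concrete, here is a hedged, illustrative sketch (the names, threshold and confidence scores are assumptions, not prescribed by the guidelines) in which low-confidence automated decisions are routed to a human reviewer instead of being taken solely by the machine.

```python
# Illustrative human-in-the-loop gate (hypothetical names and threshold):
# confident decisions pass automatically, the rest are escalated to a person.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str
    confidence: float

def decide_with_oversight(decision: Decision, threshold: float = 0.9) -> str:
    if decision.confidence >= threshold:
        return f"auto: {decision.outcome}"
    # Below the threshold, a human stays in the loop for the final call.
    return "escalated to human reviewer"

print(decide_with_oversight(Decision("approve", 0.97)))  # auto: approve
print(decide_with_oversight(Decision("reject", 0.55)))   # escalated to human reviewer
```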

2. Technical robustness and safety

AI systems should be developed with a preventive approach to risks, focusing on minimizing unexpected harm. They should be secure and protected against security vulnerabilities. If an AI system directly affects human lives, it should achieve a high level of accuracy and have a clear process for evaluating the risk of inaccurate predictions. A reliable AI system should also be reproducible.
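A small sketch of two of these points, accuracy evaluation and reproducibility, under assumed conditions: the “training” step below is only a stand-in, but it shows how fixing random seeds makes a run repeatable and how accuracy is measured on held-out data before the system is relied on.

```python
# Hedged sketch: fixed seed for reproducibility, held-out accuracy as a gate.
import random

random.seed(42)  # same seed -> same learned parameters, so the run can be reproduced

def train(_data):
    threshold = random.uniform(0, 1)   # stand-in for real learned parameters
    return lambda x: x > threshold

def accuracy(model, held_out):
    correct = sum(model(x) == label for x, label in held_out)
    return correct / len(held_out)

model = train(None)
held_out = [(0.9, True), (0.1, False), (0.8, True), (0.2, False)]  # toy evaluation set
print(f"held-out accuracy: {accuracy(model, held_out):.2f}")
```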

3. Privacy and data governance

AI systems need to guarantee privacy and data protection, covering both the information users provide to the system and the information generated about them while using it. AI systems also need to ensure that input data is not used to discriminate against or profile individuals on sensitive characteristics such as sexual orientation, age, gender or political views.
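One possible data-governance measure, sketched here with hypothetical field names, is to strip sensitive attributes from a record before it ever reaches the model, so they cannot feed profiling.

```python
# Hedged sketch (hypothetical field names): remove sensitive attributes such as
# sexual orientation, age, gender or political views before model input.
SENSITIVE_FIELDS = {"sexual_orientation", "age", "gender", "political_views"}

def sanitize(record: dict) -> dict:
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

applicant = {"income": 52000, "age": 29, "gender": "f", "postcode": "1012"}
print(sanitize(applicant))   # {'income': 52000, 'postcode': '1012'}
```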

4. Transparency

Those developing and deploying AI systems need to document data gathering, data labeling and how the algorithms are used. This covers both traceability and explainability, and explainability applies to the technical process as well as the related human decisions.
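As a hedged example of what such documentation could look like (the record structure, file names and version strings below are assumptions), a simple traceability record can capture where the data came from, how it was labelled, which model version was used, and a plain-language explanation of the decision.

```python
# Illustrative traceability record stored alongside each decision for later review.
import json
from datetime import datetime, timezone

trace = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "dataset": {"source": "loan_applications_2023.csv", "labelling": "manual review"},
    "model_version": "credit-scorer-1.4.2",
    "input_id": "application-8841",
    "decision": "approved",
    "explanation": "income above threshold; no missed payments in 24 months",
}
print(json.dumps(trace, indent=2))
```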

5. Diversity, non-discrimination and fairness

AI systems need to consider their data sets carefully and strive to avoid unfair bias. This is especially crucial in the data collection phase, and mechanisms should be put in place to tackle inadvertent historic bias, incompleteness and bad governance models.
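A minimal, assumed sketch of a bias check at data-collection time: compare positive-outcome rates across groups in the collected data, so that large gaps stemming from historic bias can be flagged for review before training. The groups and labels are toy values.

```python
# Compare positive-outcome rates per group to surface possible historic bias.
from collections import defaultdict

records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 1},
]

totals, positives = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    positives[r["group"]] += r["label"]

for group in totals:
    rate = positives[group] / totals[group]
    print(f"group {group}: positive rate {rate:.2f}")  # large gaps warrant review
```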

6. Societal and environmental well-being

AI systems should aim to benefit all human beings, including future generations. This means that AI systems should address areas of global concern and favour environmentally friendly solutions.

7. Accountability

AI systems should be open to audits and to assessment of their algorithms, data, and design processes. It is also necessary to put in place mechanisms that ensure responsibility and accountability for AI systems.
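One hedged way to support auditability (a hypothetical design, not something the guidelines mandate) is an append-only log in which each entry is hashed together with the previous one, so auditors can later verify that recorded decisions have not been altered.

```python
# Illustrative tamper-evident audit log: each entry's hash chains to the previous one.
import hashlib
import json

audit_log = []

def record(entry: dict) -> None:
    prev_hash = audit_log[-1]["hash"] if audit_log else ""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    audit_log.append({"entry": entry, "hash": hashlib.sha256(payload.encode()).hexdigest()})

record({"decision": "approved", "model": "v1.4.2", "input_id": "8841"})
record({"decision": "rejected", "model": "v1.4.2", "input_id": "8842"})
print(len(audit_log), audit_log[-1]["hash"][:12])
```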

[Info-graphic of the 7 requirements]

Source: European Commission – Futurium – AI Strategy report
