
THEMIS 5.0

Human-centered trustworthiness optimisation in hybrid decision support

GA Number: 101121042

Start date: October 1st, 2023

Duration: 36 months

[Logo: Funded by the European Union]

What is THEMIS 5.0 about?

THEMIS 5.0 brings together researchers and practitioners from diverse disciplines to ensure that AI-driven hybrid decision support is trustworthy, takes place in accordance with the particular human user's needs and moral values, and adheres to the key success indicators of the embedding socio-technical environment. It implements an AI-driven, human-centered Trustworthiness Optimisation Ecosystem that users can employ to achieve fairness, transparency, and accountability.


In THEMIS 5.0, the trustworthiness vulnerabilities of AI systems are determined using an AI-driven risk assessment approach, which effectively translates the directions given in the AI Act and relevant standards into technical implementations. THEMIS 5.0 will innovate by considering the human perspective, as well as the perspective of the wider socio-technical system, in its risk-management-based trustworthiness evaluation. An innovative AI-driven conversational agent will productively engage humans in intelligent dialogues capable of driving the execution of continuous trustworthiness improvement cycles.


THEMIS 5.0 adopts the European human-centric approach to the design, development, deployment and operation of the THEMIS 5.0 ecosystem and, in this respect, will base the implementation of its AI-driven ecosystem on strong co-creation processes. THEMIS 5.0 will pilot and evaluate the human-centric ecosystem using three well-defined use cases, each addressing a specific high-priority, critical application and industrial sector.


The THEMIS 5.0 solution enhances and accelerates the shift towards more trusted AI-enabled services by unlocking the power of humans to evaluate the trustworthiness of AI solutions and to provide feedback on how to improve them. Users can better challenge AI systems, pinpoint biases or problems, embed their own values and norms, and give AI developers and providers feedback for improvement.

Contact

Hara Stefanou (Gruppo Maggioli)
