This problem is a UK government Area of Research Interest (ARI), originally posted at https://ari.org.uk/ by a UK government organisation to signal that it is keen to see research in this area.
How should government maintain trust and accountability in using AI and machine learning? What is the public appetite for government making use of these in decision making?
Our aim is to support government and other public sector organisations in finding and exploiting emerging technologies and other innovative solutions to operational service and policy delivery challenges.
Contact details
Should you have questions relating to this ARI, please contact co_aris@cabinetoffice.gov.uk. If your query relates to a specific question, please state its title in your email.
Related UKRI Projects
- Seclea Platform - Responsible AI Tools for Everyone
- Seclea – Building Trust in AI
- Democratise access to AI governance through bringing responsible AI platform providers together and enabling access to SMEs
- FAITH: Fostering Artificial Intelligence Trust for Humans towards the optimization of trustworthiness through large-scale pilots in critical domains
- TrustMe: Secure and Trustworthy AI platform
- People Powered Algorithms for Desirable Social Outcomes
- Using Machine Learning to make the best use of Innovate UK’s operational data
- FAIR: Framework for responsible adoption of Artificial Intelligence in the financial seRvices industry
- Enhancing AI Assurance through Comprehensive Compliance, Risk Management, and Explainability Solutions
- FRAIM: Framing Responsible AI Implementation and Management