An Alan Turing Institute Network Award
Research in AI at the University of Nottingham is as rich as it is diverse. We have over sixteen themes of foundational and applied AI research across our schools and doctoral training centres. Our world-class research is embedded in solving real-world problems. Research at the University of Nottingham ranges from understanding impaired learning in patients with neurological conditions, to using deep learning to analyse aerial imagery for land use classification, to evaluating the impact of predatory bacteria in tackling antibiotic resistance, to give a few examples.
Follow this link for more information on AI research at the University of Nottingham
We are one of 24 universities to be awarded an Alan Turing Network award. In putting together our programme of activities, we assembled a team of experts from across the university to develop a unique opportunity for researchers from different disciplines to share their experience of tackling methodological challenges. These range from finding ways to automate the lengthy process of data preparation and dealing with scalability, to developing robust approaches for the verification of AI decision support systems.
For our Alan Turing Network award we are focusing on the theme of Accessible AI at Nottingham, addressing the risks posed by a lack of accessibility in AI and growing our understanding of what leads to exclusion.
Our activities are designed to reduce the digital divide, in particular by challenging the algorithmic bias caused by poor data and poor understanding.
As such, we are considering accessibility in its broadest sense. We want to offer opportunities for diverse researchers to engage, and to build public trust in the transparency of AI decision-making by facilitating pro-active engagement. We seek to democratise AI and data science by empowering citizens to access, understand and exploit the world’s data.
We want to support the design and development of interactive data visualisation approaches that enable meaningful engagement and broaden understanding. We will be hosting a student competition using publicly available health data sets, with prizes for the best visualisations. The competition announcement will be posted here on 15 December 2022.
The aim of Accessible AI@Nottingham and our network activities is to build public trust by promoting the transparency of AI decision-making. We have designed a series of activities for pro-active engagement, and aim to empower people to be confident in accessing, understanding and exploiting data.
Award Lead: Praminda Caleb-Solly
Project CoIs: Anna-Maria Piskopani, Stuart Marsh, Ender Ozcan, Yordan Raykov, Cristina Vrinceanu, Steve Benford, Mark Van Rossum, Christopher Woodard, Tony Pridmore, Doreen Boyd, Zachary Hoskins, Alexander Kasprzyk, Nicholas Watson, James Goulding, Paul Grainge
Knowledge Cafes enable a conversational process that brings a group of people together to share ideas and learn from each other. The aim is to gain a deeper understanding of a topic and the issues involved, and to explore possibilities. There is no attempt to make decisions; instead, the focus is on facilitating relaxed and informal conversations that help people get a better understanding of a subject area, so that they can contribute to discussions that shape future developments.
We have held two Knowledge Cafes over the course of this project to help people find out about assistive robots, AI and data privacy.
One of the greatest political, social and economic challenges of the 21st century for western societies with an ageing population is how to maintain a high standard of health and well-being. One solution is the use of intelligent assistive robots in the health and social care sector.
Accurate datasets play a critical role in developing these robots. Artificial Intelligence (AI) has become a key element of such robots, and every AI model is trained and evaluated using data, quite often in the form of static datasets. Researchers depend on these data samples to train and test the algorithmic systems that power AI-embodied robots. Data from participants in research projects are needed for the design, programming, construction and testing of these robots. These new technologies bring not only advantages but also a variety of concerns regarding their direct and indirect effects on society.
In our Knowledge Cafes, we invited members of the public to see some assistive robots in action and then participate in a discussion on the benefits and risks of these robots, how their data could be used, and how GDPR, AI regulation and ethics attempt to minimise these risks.
These events were crucial in creating opportunities for public debate and awareness of AI in healthcare, and in supporting responsible research and innovation.
Here are some of the issues highlighted by the participants during the discussions, as illustrated by Sam Church: