Contribute to the discussion about AI Accountability

We are organising an online “round table” event to discuss ideas and experiences relating to the accountability of AI systems. The event will be held via Teams or Zoom between 10am and 1pm (BST) on Monday 20th September 2021.
 
This event is one of a series that we are organising with different stakeholder groups. We have already held an event with lawyers earlier this summer, and we are now looking for industry professionals with technical expertise in building and operating AI systems (with a particular focus on ML). You do not need direct experience of “AI accountability”, as full context for the discussion will be set by the project team. Places are limited: we expect only 10-15 attendees, in order to provide a suitable platform for discussion.
 
As part of the event we will also demonstrate our initial software prototype for the Accountability Fabric to stimulate discussion and to elicit feedback.

Please contact us if you are interested in attending.

CfP: Workshop on Reviewable and Auditable Pervasive Systems (WRAPS)

Members of the RAInS project are running the 2021 Workshop on Reviewable and Auditable Pervasive Systems (WRAPS) in conjunction with UbiComp.

Location: Virtual

Important dates:

  • Paper submission: 15th June, 2021
  • Author notification: 15th July, 2021
  • Final camera ready due: 23rd July, 2021
  • Workshop date: 25/26th September, 2021 (exact date/time to be confirmed)

All deadlines are 23:59 AoE.

Website: https://wraps-workshop.github.io/

*** This workshop will bring together a range of perspectives on how we can better audit and understand the complex, sociotechnical systems that increasingly pervade our world.
From tools for data capture and retrieval, to technical/ethical/legal challenges, to early ideas on relevant concepts – we are calling for submissions that help further our understanding of how pervasive systems can be built to be reviewable and auditable, helping to bring about more transparent, trustworthy, and accountable technologies.***


Emerging technologies (e.g. IoT, AR/VR, AI/ML) are increasingly being deployed in new and innovative ways – be it in our homes, vehicles, or public spaces. Such technologies have the potential to bring a wide range of benefits, blending advanced functionality with the physical environment. However, they also have the potential to drive real-world consequences through decisions, interactions, or actuations, and there is a real risk that their use can lead to harms such as physical injury, financial loss, or even death. These concerns appear ever more prevalent, as a growing sense of distrust has led to calls for *more transparency and accountability* surrounding the development and use of emerging technologies.
A range of things can—and often do—go wrong, be they technical failures, user errors, or otherwise. As such, means to *review, understand and act upon* how these systems are built, developed, and used are crucial for determining the cause of failures, preventing re-occurrences, and identifying parties at fault. Yet, despite the wider landscape of societal and legal pressures for record keeping and increased accountability (e.g. GDPR and CCPA), implementing transparency measures faces a range of challenges.
This calls for different thinking about how we can better understand (and interpret) the emerging technologies that pervade our world. As such, this workshop aims to explore new ideas on this nascent topic, collating some of the outstanding challenges and potential solutions to implementing more meaningful transparency throughout pervasive systems. We aim to bring together experts from a range of disciplines, including those of technical, legal, and design-oriented backgrounds.

Submissions:

We invite papers of 2 to 6 pages (excluding references), using the ACM sigconf template (https://www.acm.org/publications/proceedings-template). Submissions can be made via PCS at https://new.precisionconference.com/submissions.

Accepted papers will be published in the UbiComp/ISWC 2021 adjunct proceedings, which will be included in the ACM Digital Library. All submissions will be peer reviewed, and should be properly anonymised.

Some suggested topics include (but are not limited to):
- Tools, techniques, and frameworks to assist in providing greater transparency & oversight of the workings of pervasive systems
- Methods for explaining & understanding systems/models
- Methods for fostering trust & transparency in pervasive systems
- The usability of audit data
- Performance implications of capturing audit data
- Privacy, security and data protection implications of auditability mechanisms
- Vocabularies and frameworks for modelling relevant information to support auditability & explainability
- Data aggregation and consolidation
- Legal considerations relating to record keeping & auditing mechanisms
- Access controls & data sharing regimes
- Audit log verification methods

Join us for Explorathon 2020

The RAInS team are participating in two events as part of Explorathon 2020, Scotland’s contribution to European Researchers’ Night, a European Commission-funded initiative. Both events are free, online, and open to the public, and will take place next week (23 and 27 November). Details for each event can be found below.

Event: When AI gets it wrong: Who’s to blame for technology’s failure?

When: Monday, 23 November 2020 @3.00pm
Where: Zoom
How: Register via the Eventbrite link below
About the event:

Artificial Intelligence (AI) is increasingly being used in applications from health care to transport to finance. But who do we hold to account when something goes wrong? As we work to remove the human from decision-making processes, what do we still want to know about who decided how decisions should be made, and how the machine is programmed to behave?

During this one-hour, interactive event, we will explore the decisions that are made in developing these systems – including those made by the human designers, builders, operators, and users of such AI systems. You will be given the chance to “vote” on the best outcomes for proposed scenarios before learning what is happening behind the scenes of those AI systems.

Register for this event at https://www.eventbrite.co.uk/e/explorathon-2020-when-ai-gets-it-wrong-tickets-125646324539

Event: Scottish Research Showcase Flashmob

When: Friday, 27 November 2020 @TBD
[Note: the full event runs 9.00am – 9.00pm; our timeslot will be announced on Monday, 23 November]
Where: Twitter
About the event:
The Scottish Research Showcase, in collaboration with the Global Science Show, is reaching out to audiences around the Twittersphere to create a digital “flashmob” of science and learning. Throughout the event, researchers will showcase their work to the world by sharing short videos as part of a Twitter thread coordinated by the @ernscot account, creating a chain of research from a wide variety of disciplines.

You can follow along by visiting the Explorathon Scotland Twitter account (@ernscot). We will also direct people to the event from our own Twitter account (@RAInS_Project), so follow us there so you don’t miss a beat!

Upcoming Event - AI: The Good, the Bad, and the Ugly, at Explorathon’19

RAInS is participating in Aberdeen’s Explorathon’19. Drop in to our interactive event, where we will explore the accountability and transparency challenges of AI, with particular focus on facial recognition technology. 

Admission is free and no booking is required.

When: Wednesday 25 September 2019 6:00 pm - 7:00 pm

Where: Aberdeen Central Library, Committee Room, Rosemount Viaduct, AB25 1GW.

Event URL: https://www.explorathon.co.uk/events/ai-the-good-the-bad-and-the-ugly/

Event Information: Algorithms and intelligent systems are everywhere; in the news, in our daily lives and hidden in plain sight. There is a rush to make use of these technologies in areas such as healthcare, transport and security, because of the many benefits that they can bring. However, when things do go wrong, how do we determine the cause of failure? Who might be held accountable? At this interactive event, we will explore the accountability and transparency challenges of AI, with particular focus on facial recognition technology. We ask: what are the legal ramifications if these systems result in harm?

RAInS Launched!

A new project has begun at the University of Aberdeen. Realising Accountable Intelligent Systems (RAInS) is funded by the EPSRC and is led by Professor Peter Edwards.

The RAInS project aims to realise processes by which intelligent systems can be made accountable, by developing an accountability fabric for use by a variety of stakeholders. The project will use computational models of provenance as a substrate for enabling trust; such a mechanism facilitates transparency and accountability by recording the processes, entities, and agents associated with a system and its behaviours, supporting verification and compliance monitoring.
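To make the idea of a provenance record concrete, below is a minimal sketch of what recording processes, entities, and agents might look like using the W3C PROV data model via the Python prov package. The namespace and the entity, activity, and agent names here are hypothetical illustrations only; this is not a representation of the Accountability Fabric itself.

```python
# A minimal sketch of a provenance record in the W3C PROV data model,
# using the Python "prov" package (pip install prov). All identifiers
# below are hypothetical examples, not part of the RAInS design.
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/rains-demo#')

# Entities, activities, and agents associated with a system's behaviour
data = doc.entity('ex:training-data')
model = doc.entity('ex:trained-model')
training = doc.activity('ex:training-run')
engineer = doc.agent('ex:ml-engineer')

# Relate them: who did what, with which inputs, producing which outputs
doc.used(training, data)                    # the run consumed the data
doc.wasGeneratedBy(model, training)         # the run produced the model
doc.wasAssociatedWith(training, engineer)   # the engineer ran it
doc.wasAttributedTo(model, engineer)        # ...and is answerable for the model

print(doc.serialize(indent=2))              # emit the record as PROV-JSON
```

A record of this kind can later be queried to trace a system behaviour back through the activities and agents that produced it, which is the sense in which provenance supports verification and compliance monitoring.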

This project is a collaboration between the University of Aberdeen, the University of Cambridge, and the University of Oxford.