Earlier this month I attended one of the regular BRLSI philosophy talks. Andreas Wasmuht prepared an excellent introduction to the work of Martin Heidegger, specifically his work Being and Time (first published in 1927). This post is not specifically about that work, but it got me thinking again about how much more …
I'm just back from Copenhagen where we ran a roundtable on AI assurance, asking how we can best use tools to reduce risk and address compliance with upcoming regulations, particularly the European AI Act. The speaker details are in the link, and there will be a full video too. We had about …
Yesterday was my last day of employment at The University of Bath. I have taught Robotics and Autonomous Systems for four years, twelve semesters. It's been a great experience, especially the contact with students in lectures, labs and tutorials. But now it is time to move on. The workload is relentless and gruelling. The last …
Delighted to be part of the programme for the @IETevents new webinar series on Responsible AI, exploring cutting-edge technology and future applications over seven sessions in November and December. Booking is open now: http://ow.ly/Kcdg30rWpTu
This year has been hectic, to put it mildly. The significantly increased workload of online teaching, together with students' heightened anxieties and the consequent increase in pastoral work, became an all-consuming void that could not be filled, however many hours one worked. On top of all that, the uncertainty of rule changes and the chronic lack of advance notice given to HE by Government increased the number and frequency of staff and teaching-related meetings. All in all, a not-to-be-repeated experience. Work/Eat/Sleep/Repeat for nine months solid.
Having said all that, there were some highlights, and I thought them worth mentioning in this blog. Firstly, I was fortunate to have some really great PGT students this year. They threw themselves into online teaching with exceptional engagement, and as a result many achieved excellent coursework and exam results. They had to learn robotics software and hardware design completely remotely using laptop-based simulation tools, accommodating Mac, PC, and sometimes even Linux environments. I've written a blog post about teaching with simulation, and it includes some video where students demonstrate their work. Here is the first of two videos:
From January through to May I ran our third-year undergraduate group design and business project within the Department of Electronic & Electrical Engineering. Students work in groups, producing technical feasibility studies, designs and ultimately full business plans for real-world projects. Highlights included an online Dragon's Den event, with representatives of our industry advisory board acting as Dragons, and an in-depth assessed final group presentation and design exhibition, again all carried out online. I really enjoy working with well-motivated students, and our third years handled the online delivery comfortably.
Remarkably, I've also squeezed in some time to continue participating in the IEEE P7001 standards working group, and we've just published a Frontiers in Robotics and AI journal article explaining our approach to producing the standard. We've been working on this for several years now, so to be close to a published standard and to have this paper out is particularly satisfying given the timing. Huge thanks must go to Prof Alan Winfield for leading this work.
Overall, it's been a long hard year, but nevertheless there have been some highlights, and some lasting success.
"I firmly believe that any man's finest hour, the greatest fulfilment of all that he holds dear, is that moment when he has worked his heart out in a good cause and lies exhausted on the field of battle - victorious." --- Vince Lombardi, 1913-1970
The High-Level Expert Group on Artificial Intelligence (HLEG) have produced a new tool for the assessment of trustworthiness of AI systems. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a tool to assess whether AI systems at all stages of their development and deployment life cycles comply with seven requirements of Trustworthy AI.
The HLEG was set up by the European Commission to support the European Strategy on AI. Created in June 2018, the HLEG produces recommendations related to the development of EU policy, together with recommendations on the ethical, social and legal issues of Artificial Intelligence. The HLEG comprises over 50 experts, drawn from industry, academia and civil society. It is also the steering group for the European AI Alliance, essentially a forum that provides feedback to the HLEG and more widely contributes to the debate on AI within Europe.
The ALTAI Tool
Based on seven key requirements, the new ALTAI tool is a semi-automated questionnaire allowing you to assess the trustworthiness of your AI system. It does, of course, rely on honest answers to the questions! The seven key requirements are:
- Human Agency and Oversight.
- Technical Robustness and Safety.
- Privacy and Data Governance.
- Transparency.
- Diversity, Non-discrimination and Fairness.
- Societal and Environmental Well-being.
- Accountability.
Using the system is relatively straightforward. First, you must create an account and log in to the ALTAI web site. Then choose 'My ALTAIs'. The system allows you to complete, store and update multiple ALTAI questionnaires. Once you have completed the questionnaire, the system produces a graphical representation of your 'trustworthiness' (the spider graph above), together with a set of specific recommendations based on your answers. Note that the ALTAI website is a prototype of an interactive version of the Assessment List for Trustworthy AI. You should not use personal information or intellectual property while using the website.
I found the system easy to use, but would have liked to see a graph/tree of how the question boxes are arranged, and a clearer explanation of the red and blue outlines - in short, more transparency from the assessment system itself!
I also have some reservations about the independence of the person completing the assessment, and the possibility of bias when someone closely involved in an AI development project is tasked with assessing it. This could be mitigated by using an independent, suitably qualified and competent auditor.
It's very encouraging to see the emergence of these kinds of audit systems specifically targeted towards the deployment of AI technologies. Hopefully, as these systems develop, they will align with the international standards currently under development - for example the IEEE Ethically Aligned Design standards, such as P7001 for Transparency of Autonomous Systems.
After many months of writing, proofreading and waiting for printing, I'm delighted that my book is now available. It's a very practical book, explaining why transparency is so important, followed by details of experiments with various forms of transparency.
The book is based on my PhD research, but is expanded and extended, including an additional chapter to explain the importance of transparency within the wider context of accountability, responsibility and trust (ART). Here is a short extract from that new chapter:
Transparency as a Driver for Trust
.... I argue that although trust is complex, we can use system transparency to improve the quality of information available to users, which in turn helps to build trust. Further, organisational transparency drives both accountability and responsibility, which also bolster trust. Therefore transparency is an essential ingredient for informed trust. These relationships are illustrated in Figure 2.3.
System Transparency helps users better understand systems as they observe, interact with, or are otherwise affected by them. This informed understanding of system behaviour in turn helps users make appropriate use of systems.
System Transparency also supports accountability, by providing mechanisms that allow the system itself to offer some kind of ‘account’ for why it behaves as it does. It also provides mechanisms to facilitate traceability....
Organisational Transparency supports and encourages organisational accountability, and helps to develop a culture of responsibility....
Trust is built from a combination of informed system understanding, together with the knowledge that system providers are accountable and behave responsibly. Trust ultimately drives greater product acceptance and use....
In this book I also argue for the creation of transparency standards applicable to Autonomous Intelligent Systems (AIS) of all kinds. Standards will encourage transparency, and regulation may enforce it. This encourages businesses to develop cultures that embrace transparency in their processes and products.
Wortham, Robert H., Transparency for Robots and Autonomous Systems: Fundamentals, Technologies and Applications, The Institution of Engineering and Technology, 2020
ISBN-13: 978-1-78561-994-6 (eBook ISBN: 978-1-78561-995-3)
This year I've been developing a project with a final-year undergraduate in Computer Science at Bath, integrating the Instinct Reactive Planner with the Robot Operating System (ROS). The project has gone really well, resulting in a flexible and powerful framework enabling the integration of ROS-based robots with Instinct. The target platform used for the project is the Husarion ROSbot (shown). There is also a short video. For further details please contact me.
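To give a flavour of what a reactive planner does, here is a minimal, purely illustrative Python sketch of priority-based action selection - the core idea underlying planners of this kind. This is not the Instinct API or the project's actual code; the drive names, priorities and sensor field are invented for the example.

```python
# Illustrative sketch of reactive (priority-based) action selection.
# Not the Instinct Planner API - names and values here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Drive:
    name: str
    priority: int                        # higher number wins
    releaser: Callable[[Dict], bool]     # fires when sensed conditions allow
    action: str                          # command passed to the robot layer

def select_action(drives: List[Drive], senses: Dict) -> str:
    """Return the action of the highest-priority drive whose releaser fires."""
    eligible = [d for d in drives if d.releaser(senses)]
    if not eligible:
        return "idle"
    return max(eligible, key=lambda d: d.priority).action

# Two hypothetical drives: obstacle avoidance outranks exploration.
drives = [
    Drive("avoid_obstacle", 10, lambda s: s["range_cm"] < 20, "turn_away"),
    Drive("explore",         1, lambda s: True,               "move_forward"),
]

print(select_action(drives, {"range_cm": 15}))   # obstacle close: turn_away
print(select_action(drives, {"range_cm": 100}))  # path clear: move_forward
```

In a ROS integration, the selected action string would be translated into messages on the appropriate robot topics each control cycle; the planner itself stays deliberately simple and reactive.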
Today I spent a couple of hours in hackathon mode with fellow members of the AmonI (Artificial Models of Natural Intelligence) research group at Bath. We decided it was time to bring the AmonI web pages up to date, so that the web site properly reflects our previous and current …