Deciphering the behaviour of intelligent others is a
fundamental characteristic of our own intelligence.
As we interact with complex intelligent artefacts,
we inevitably construct mental models to understand
and predict their behaviour. If these models
are incorrect or inadequate, we run the risk of
self-deception or even harm. This paper reports
progress on a programme of work investigating approaches
for implementing robot transparency, and
the effects of these approaches on utility, trust and
the perception of agency. Preliminary findings indicate
that building transparency into robot action
selection can help users form a more accurate understanding
of the robot.