There is growing interest in ensuring that Artificial Intelligence (AI) systems are not only ethically sound, but also that the decisions they make are safe and beneficial to humanity. Traditionally, this problem is framed as embedding human values in the decision architecture of the AI system – but if we extend the problem to embedded AI, that is, smart robots, how can we begin to embed human values in a robot?

One answer to this question is to look – cognitively – at what constitutes moral decision-making, and then to attempt to model the mechanism that brings it about. Bringing together perspectives from psychology and philosophy as well as robotics and computing, this workshop aims to begin building a community of researchers, academics and industry practitioners interested in addressing this problem. The following questions will drive the workshop:

- What constitutes ‘moral decision-making’?
- Why (if at all) do we need machines that can tell right from wrong?
- What is the best way to model morality?
- What further research needs to be done to develop robots with moral agency?

Event Programme

08.30 Arrival and breakfast
09.00 Host introduction and workshop overview
09.45 1st Keynote presentation
10.30 Break
10.45 2nd Keynote presentation
11.30 3rd Keynote presentation
12.15 Lunch and university tours
13.15 4th Keynote presentation
14.00 5th Keynote presentation
14.45 Break
15.00 Discussion and consolidation exercise
17.00 Close

Location and travel details

AIRC, Ideas Space, Cranfield University

Who should attend

Invited attendees, interested researchers and industry representatives

Cost

Free to attend