My primary research interest is how trust reasoning can be integrated into formal argumentation frameworks for multi-party dialogues so that the most trustworthy conclusions can be identified and extracted. Much of my work has focused on meta-argumentation frameworks in which participants consider how much they trust one another and the arguments that have been advanced in order to define their preferences over those arguments: arguments (or the information supporting them) from more trustworthy sources may be preferred to those from less trustworthy ones.
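To illustrate the idea in the abstract, the sketch below shows one simple way trust could induce preferences in a Dung-style argumentation framework: an attack succeeds as a defeat only if the attacker's source is at least as trusted as the target's, and accepted arguments are then computed under grounded semantics. The names (`Argument`, the trust scores, `grounded_extension`) and the specific preference rule are illustrative assumptions, not my actual formalism.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    name: str
    source: str          # the dialogue participant who advanced the argument

def defeats(attacks, trust):
    """Keep only attacks whose attacker is not from a strictly less trusted source (assumed rule)."""
    return {(a, b) for (a, b) in attacks if trust[a.source] >= trust[b.source]}

def grounded_extension(arguments, defeat):
    """Least fixpoint of the characteristic function (Dung's grounded semantics)."""
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(any((c, b) in defeat for c in extension)   # every defeater b of a
                   for (b, t) in defeat if t == a)            # is counter-defeated by the extension
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Example dialogue: two participants advance mutually attacking arguments.
alice_arg = Argument("p", "alice")
bob_arg   = Argument("not-p", "bob")
attacks   = {(alice_arg, bob_arg), (bob_arg, alice_arg)}
trust     = {"alice": 0.9, "bob": 0.4}   # Alice is the more trusted source

accepted = grounded_extension({alice_arg, bob_arg}, defeats(attacks, trust))
print([a.name for a in accepted])        # only the argument from the more trusted source survives
```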
At the moment, I am working on trust-building AI technologies. In particular, I am investigating the factors that affect users' trust in the algorithms of online digital platforms and developing tools that can be used to engender or rebuild trust on these platforms.