I am interested in all aspects of agreement technologies, such as computational trust, argumentation, and reputation models, and in how these models can be used to minimize the inconsistencies inherent in interactions among autonomous agents in open multi-agent systems. Specifically, I am interested in normative, descriptive, and reasoning models of communication, coordination, collaboration, and dialogue among artificial and human agents, with a view to resolving conflicts of opinion and making informed, reliable decisions from the information obtained.
In my PhD, I studied how trust evolves in a multi-party dialogue and how that trust affects the justified conclusions drawn from the dialogue. In particular, I investigated how a dialogue participant can assign trust ratings to other participants and their arguments, and then use those ratings to define its preferences over the arguments and draw justifiable conclusions, given that arguments (or the information supporting them) from more trustworthy sources may be preferred to those from less trustworthy sources. I also described a preference aggregation technique to resolve conflicting preferences among dialogue participants. Within the proposed frameworks, I developed a number of argument schemes for meta-level reasoning about trust in arguments and their sources that can be integrated into any dialectical argumentation framework. These argument schemes encode intuitive properties for the dynamic trust ratings of arguments and their sources. I explored the use of dynamic trust ratings of arguments, and the preference orderings they induce, as a strategy for conflict resolution in argumentation-based dialogues.
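As a rough illustration of this idea, here is a minimal sketch (not the framework from the thesis; all participants, arguments, and trust values are hypothetical) in which a mutual attack between two arguments is resolved by preferring the argument from the more trusted source, and sceptical acceptance is then computed in the style of Dung's grounded semantics:

```python
# A minimal sketch (not the thesis framework) of trust-based conflict
# resolution between arguments. All sources, arguments, and trust
# values below are hypothetical.

# Trust ratings a participant assigns to the other participants.
trust = {"alice": 0.9, "bob": 0.4}

# Each argument is mapped to the participant who put it forward.
source = {"A": "alice", "B": "bob"}

# A and B attack each other, e.g. they support contradictory claims.
attacks = {("A", "B"), ("B", "A")}

def effective_attacks(attacks, source, trust):
    """An attack succeeds only if the attacker's source is at least as
    trusted as the target's, turning trust into a preference ordering."""
    return {(a, b) for (a, b) in attacks
            if trust[source[a]] >= trust[source[b]]}

def grounded_extension(arguments, attacks):
    """Sceptically accept arguments: repeatedly accept every unattacked
    argument and discard the arguments it defeats."""
    remaining, accepted = set(arguments), set()
    while True:
        unattacked = {a for a in remaining
                      if not any(t == a for (_, t) in attacks)}
        if not unattacked:
            return accepted
        accepted |= unattacked
        defeated = {t for (s, t) in attacks if s in unattacked}
        remaining -= unattacked | defeated
        attacks = {(s, t) for (s, t) in attacks
                   if s in remaining and t in remaining}

resolved = effective_attacks(attacks, source, trust)
print(sorted(grounded_extension(source, resolved)))  # ['A']
```

Without the trust ratings, the mutual attack would leave neither argument sceptically accepted; the preference derived from trust is what breaks the tie in favour of the more trustworthy source.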
Currently, I am working on trust-building AI technologies (see the ReEnTrust project for details). As a Research Associate on the University of Edinburgh team, my role involves designing and implementing the research tools we use for online experiments, contributing to academic paper writing, research dissemination, and data analysis. Specifically, I am involved in developing the sandbox tool we use for our sandbox studies. In the project, we have designed and implemented prototype experimental tools called Algorithm Playgrounds. The current Algorithm Playground combines several algorithms to demonstrate how e-recruitment algorithms work, and incorporates educational material about e-recruitment algorithms presented in different explanation styles. Using the tool, we are investigating users' perceptions of the fairness, accuracy, transparency, reliability, and trustworthiness of e-recruitment systems, and how explanations affect such perceptions.
I am also working on the design and implementation of a mediation tool for rebuilding trust in AI technologies. The tool is designed to enhance algorithmic transparency and users' control over the algorithmic decision-making process. Using this tool, we will investigate the impact of algorithmic transparency and enhanced user control on trust building and rebuilding in AI technologies.
Combining my PhD and current research, I have worked on a variety of issues relating to the following themes:
- argumentation theories, including abstract and structured argumentation frameworks
- argumentation-based agent dialogues
- trust computing and reputation mechanisms
- explainable and responsible AI
- agent communication and language semantics
If you are interested in working with me on any research area relating to my interests, please email me (firstname.lastname@example.org).