I am interested in AI ethics, privacy, and cybersecurity research. I am also interested in all aspects of agreement technologies, such as computational trust, argumentation, and reputation models, and in how these models can be used to minimize inherent inconsistencies in the interactions among autonomous agents in open multi-agent systems. Specifically, I am interested in normative, descriptive, and reasoning models of communication, coordination, collaboration, and dialogue among artificial and human agents, with a view to resolving conflicts of opinion and making informed, reliable decisions based on the information obtained.
In my Ph.D., I studied how trust evolves in a multi-party dialogue and how such trust affects the justified conclusions drawn from the dialogue. In particular, I investigated how a dialogue participant can assign trust ratings to other participants and their arguments, and use those ratings to define its preferences over the arguments and draw justifiable conclusions, given that arguments (or the information supporting them) from more trustworthy sources may be preferred to arguments from less trustworthy sources. I also described a preference aggregation technique to resolve conflicting preferences among dialogue participants. Within the proposed frameworks, I developed a number of argument schemes for meta-level reasoning about trust in arguments and their sources that can be integrated into any dialectical argumentation framework. These argument schemes encode intuitive properties of the dynamic trust ratings of arguments and their sources. Finally, I explored the use of dynamic trust ratings of arguments, leading to preference orderings over them, as a strategy for conflict resolution in argumentation-based dialogues.
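To give a flavour of this line of work, the Python sketch below shows one simple way trust ratings over sources can induce a preference ordering over arguments, which then decides which attacks succeed before computing the justified arguments under grounded semantics. The argument names, sources, trust values, and the particular defeat rule (an attack succeeds unless its target is strictly preferred to the attacker) are illustrative conventions from the preference-based argumentation literature, not the specific frameworks developed in the thesis.

```python
# Minimal illustration: trust ratings over sources induce a preference
# ordering over arguments; preferences filter attacks into defeats; the
# grounded extension gives the justified arguments.

# Hypothetical trust ratings for information sources, in [0, 1].
trust = {"alice": 0.9, "bob": 0.4, "carol": 0.7}

# Arguments labelled with the source that put them forward (illustrative).
source = {"A": "alice", "B": "bob", "C": "carol"}

# Attack relation between arguments: (attacker, target).
attacks = {("B", "A"), ("A", "B"), ("C", "B")}

def preferred(x, y):
    """x is strictly preferred to y if x's source is strictly more trusted."""
    return trust[source[x]] > trust[source[y]]

# An attack only succeeds (becomes a defeat) if the target is not
# strictly preferred to the attacker.
defeats = {(a, b) for (a, b) in attacks if not preferred(b, a)}

def grounded_extension(args, defeats):
    """Least fixpoint of the characteristic function (grounded semantics)."""
    extension = set()
    while True:
        # An argument is acceptable w.r.t. the current extension if every
        # argument that defeats it is itself defeated by the extension.
        acceptable = {
            a for a in args
            if all(any((d, b) in defeats for d in extension)
                   for (b, t) in defeats if t == a)
        }
        if acceptable == extension:
            return extension
        extension = acceptable

args = set(source)
print("defeats:", defeats)
print("justified arguments:", sorted(grounded_extension(args, defeats)))
```

In this toy example, B's attack on A fails because A comes from a strictly more trusted source, so A and C end up justified; changing the trust ratings changes the preference ordering and hence which conclusions are justified.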
Currently, at PETRAS, University College London, I am investigating: i) the general and common ethical principles for converged AI and IoT technologies; ii) users’ perceived ethical and moral issues with AI and IoT; iii) the perceptions of AI/IoT researchers, developers, and end users about existing AI and IoT ethical principles; and iv) the challenges of adopting and implementing ethics in AI and IoT.
As a Research Associate at the Artificial Intelligence and its Applications Institute (AIAI), University of Edinburgh, I investigated the role of contexts in managing privacy online (see the CONTEXT project for details). On the CONTEXT project, my roles involved developing a formal representation of privacy contexts by extending state-of-the-art ontologies, modeling use-case scenarios with the proposed representation, designing and implementing a prototype agent-based Privacy Enhancing Technology (PET) that acts on behalf of an individual to preserve privacy, and contributing to academic paper writing, research dissemination, and data analysis. I also worked on trust-building AI technologies (see the ReEnTrust project for details). On the ReEnTrust project, my roles involved the design and implementation of the research tools we used for online experiments, as well as contributions to academic paper writing, research dissemination, and data analysis. In particular, I helped develop the sandbox tool we used for our sandbox studies. In the project, we designed and implemented prototype experimental tools called Algorithm Playgrounds. An Algorithm Playground combines different algorithms to demonstrate how e-recruitment algorithms work, and incorporates educational material about e-recruitment algorithms using different explanation styles. Using the tool, we investigated users’ perceptions of the fairness, accuracy, transparency, reliability, and trustworthiness of e-recruitment systems, and how explanations impact such perceptions.
Across my Ph.D. and current research, I have worked on a variety of issues relating to the following themes:
- argumentation theories such as abstract argumentation frameworks and structured argumentation frameworks
- argumentation-based agent dialogues
- trust computing and reputation mechanisms
- explainable and responsible AI
- agent communication and language semantics
- AI ethics
- privacy and cybersecurity
If you are interested in working with me on any research area relating to my interests, please email me at g.ogunniye@ucl.ac.uk.