Machine Learning and Reasoning for Reliable and Explainable AI

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 September 2024

Special Issue Editors

Guest Editor
1. Associate Professor, School of Computing, University of Portsmouth, Portsmouth PO1 2UP, UK
2. Associate Professor, Faculty of Engineering, Technical University of Sofia, 1756 Sofia, Bulgaria
Interests: future and emerging technologies; computing; computational intelligence

Guest Editor
School of Computing, University of Portsmouth, Portsmouth PO1 2UP, UK
Interests: fuzzy logic; artificial intelligence; machine learning; AI and ML applications; intelligent transportation systems

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is currently among the technologies with the greatest impact across many areas, carrying geopolitical, social, and economic implications, among others. However, the increasing complexity of AI models, such as convolutional neural networks and deep learning architectures, raises concerns about their interpretability and explainability. As AI technologies become embedded in decision-making processes, it is crucial to understand and validate the reasoning behind AI-generated outcomes. This necessity has given rise to the concept of explainable AI (XAI), an area of research and development focused on making AI systems more transparent and interpretable.

XAI refers to the ability of AI systems to provide clear and understandable explanations for their actions and decisions, allowing human users to comprehend and trust the results and outputs produced by machine learning (ML) algorithms. Traditional AI models, particularly deep learning algorithms, often operate as black boxes, making it challenging for users to understand how these systems arrive at specific outcomes. XAI focuses on developing new approaches for explaining black-box models, achieving good explainability without sacrificing system performance.
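To make the idea of post-hoc explanation concrete, the sketch below applies permutation feature importance, one widely used model-agnostic XAI technique, to a black-box classifier using scikit-learn. The dataset, model, and parameter choices are illustrative assumptions only, not methods prescribed by this Special Issue.

```python
# Minimal sketch of a post-hoc, model-agnostic explanation: permutation
# feature importance scores each input feature by how much held-out
# accuracy drops when that feature's values are shuffled.
# Dataset, model, and parameters are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ("black-box") model.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Explain it post hoc: shuffle each feature on held-out data and measure
# the mean drop in accuracy over repeated permutations.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Because the technique only queries the trained model through its predictions, it applies unchanged to any classifier or regressor, which is precisely the appeal of model-agnostic explanation methods.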

Complementing ML with machine reasoning can make AI more sophisticated. Machine reasoning solves problems by applying human-like common sense to learned data. It is a crucial complement to ML because it provides recommendations that are explainable: humans can trace any decision back through the reasoning process, which increases the auditability and explainability of the system. Explainability in ML can help build trust in ML models and enable their adoption at a larger scale.
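As a toy illustration of how a reasoning layer can make ML-driven decisions traceable, the plain-Python sketch below records which hand-written rules fired for a decision, yielding an audit trail. The loan-screening scenario, rule names, and thresholds are entirely hypothetical, and the learned risk score is stubbed in where a trained model's output would normally appear.

```python
# Hypothetical sketch: a rule-based reasoning layer over an ML score,
# where the list of fired rules forms an auditable explanation trace.
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    trace: list = field(default_factory=list)  # human-readable audit trail

RULES = [
    # (name, predicate over the applicant record, human-readable reason)
    ("high_model_risk", lambda a: a["ml_risk_score"] > 0.7,
     "learned risk score exceeds 0.7"),
    ("insufficient_income", lambda a: a["income"] < 20_000,
     "declared income below 20,000"),
]

def decide(applicant: dict) -> Decision:
    decision = Decision(approved=True)
    for name, predicate, reason in RULES:
        if predicate(applicant):
            decision.approved = False
            decision.trace.append(f"rule '{name}' fired: {reason}")
    if decision.approved:
        decision.trace.append("no rejection rule fired")
    return decision

# In practice ml_risk_score would come from a trained model; here it is a stub.
print(decide({"ml_risk_score": 0.85, "income": 35_000}).trace)
```

The trace makes every rejection attributable to a named rule, which is the kind of decision-level auditability that motivates combining learned models with symbolic reasoning.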

In this upcoming Special Issue, we welcome research articles and reviews on explainable and interpretable ML techniques across a wide range of applications. Research topics of interest include (but are not limited to) the following:

  • Human–computer interaction for designing user interfaces for explainability;
  • Causal thinking, reasoning, and modelling;
  • Ethical ML;
  • Causal learning for explainable ML;
  • Transparent, comprehensible, and explainable ML;
  • Reliability analysis of ML models;
  • Interpretability in complex machine learning modelling;
  • Responsible generative AI;
  • Explainable and interpretable AI for classification and non-classification problems (e.g., regression, segmentation, and reinforcement learning);
  • Explainable/interpretable AI for fairness, privacy, and trustworthy models;
  • Novel criteria to evaluate explanation and interpretability;
  • Theoretical foundations of explainable/interpretable AI;
  • Planning under uncertainty;
  • Explainable conversational agents;
  • Explaining black-box models;
  • Hybrid approaches (e.g., neuro-fuzzy systems) for XAI;
  • Role of fuzzy knowledge representation in XAI;
  • Role of natural language generation in XAI.

Dr. Raheleh Jafari
Dr. Alexander Gegov
Dr. Farzad Arabikhan
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • explainable AI
  • machine learning
  • artificial intelligence

Published Papers

This special issue is now open for submission.