Prof. K.P. (Suba) Subbalakshmi
  • Home
  • News
  • Research
    • AI/ML for Mental Health
    • AI/ML for Cyber Safety/Security
    • Keyphrase Extraction
    • Spectrum Aware Mobile Computing
    • Cognitive Radio Networking and Security
  • Publications (chronological)
  • Activities
    • IEEE TAI Special Issue
  • About

Prof. Subbalakshmi and other JS Fellows with Deputy Secretary Blinken, 2017

Latest News


Press coverage

  • https://medium.com/datadriveninvestor/artificial-intelligence-diagnoses-alzheimers-with-near-perfect-accuracy-fb5c7b090c13

  • https://healthitanalytics.com/news/artificial-intelligence-tool-diagnoses-alzheimers-with-95-accuracy

  • https://newatlas.com/medical/ai-tool-alzheimers-disease-language/

  • https://www.techtimes.com/articles/252110/20200828/artificial-intelligence-software-can-detect-alzheimers-at-95-accuracy.htm

  • Explainable AI architecture seems to point to hand-crafted features as being more important in identifying fake news

  • Speech technology could aid in dementia detection, Speech Technology Magazine

  • Check it out: Can app tell truth from fiction? [Video]

  • Gender-spotting tool could have rumbled fake blogger

  • Raging Bull: The Lie Catcher!

Call for Papers

IEEE Transactions on Artificial Intelligence

Special Issue on New Developments in Explainable and Interpretable AI

Motivation and Introduction

Over the years, machine learning (ML) and artificial intelligence (AI) models have steadily grown in complexity, accuracy, and other quality metrics, often at the expense of the interpretability of their results. At the same time, researchers and practitioners have come to realize that greater transparency in deep learning and AI engines is necessary if these engines are to be adopted in practice. For example, a disease predictor with a very good performance metric is of little use if no explanation can be given to the end user (a physician, the patient, or even the designer of the tool). Similarly, understanding why a model makes mistakes when it does can add invaluable insight and is essential in critical applications.

This kind of transparency can be achieved either by designing interpretable AI engines that inherently offer a window into the reasoning behind the decisions they arrive at, or by designing robust post-hoc methods that can explain the decisions of an AI engine. Thus, two areas of research, called interpretable AI (IAI) and explainable AI (XAI) respectively, have emerged with the goal of producing models that are both well performing and understandable. Interpretable AI models obey domain-specific constraints so that they are more readily understood by humans; in essence, they are not black-box models. Explainable AI, on the other hand, refers to models and methods that are typically used to explain another, black-box model.
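As a purely illustrative sketch (not part of this call), the fragment below contrasts the two notions using scikit-learn on a toy dataset: a depth-limited decision tree as an inherently interpretable (IAI) model whose full decision logic can be read directly, and permutation importance as a post-hoc (XAI) explanation of a black-box random forest. The dataset, models, and hyperparameters are assumptions chosen only for the example.

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy dataset purely for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# IAI: a shallow decision tree is constrained (max depth 3) so that a human
# can read its complete decision logic directly from the fitted model.
iai_model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(iai_model, feature_names=list(X.columns)))

# XAI: a random forest is treated as a black box; permutation importance is a
# post-hoc method that reports which features its predictions rely on.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")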

With the sizable XAI and IAI research community that has formed, there is now a key opportunity to take the field of explainable and interpretable AI to the next level: to overcome the shortcomings of current neural network explanation techniques and to extend the related concepts and methods toward more widely applicable, semantically rich, and actionable XAI. This special issue aims to bring together these new developments in the fascinating field of interpretable and explainable AI.

Scope of the Special Issue

Original submissions are welcome on topics including, but not limited to:

- Explainable and interpretable AI for classification and non-classification problems (e.g., regression, segmentation, reinforcement learning)

- Explainable and interpretable state-of-the-art neural network architectures (e.g., transformers) and non-neural network models (e.g., trees, kernel methods, clustering algorithms)

- Explainable/interpretable AI for fairness, privacy, and trustworthy models

- Novel criteria to evaluate explanation and interpretability

- Theoretical foundations of explainable/interpretable AI

- Causal mechanisms for explainable/interpretable AI

- Explainable and interpretable AI for human-computer interaction

- Explainable and interpretable AI for applications (e.g., medical diagnosis, disaster prediction, credit underwriting, remote sensing, big data)

- Counterfactual explanations

- Human-in-the-loop explanations

Submission Instructions

Three kinds of articles can be submitted to this special issue: (1) Regular papers, (2) Review papers, and (3) Letters.

The special issue will follow the standard IEEE TAI submission instructions, including an impact statement. Additionally, the manuscript should contain an “Interpretability/Explainability Evaluation” section that quantifies the interpretability or explainability of the proposed methods; examples of such metrics include sparsity, case-based reasoning, etc. If proposing an XAI model, the authors are encouraged to state the goals of the explanation: for example, will the explanation provided by the model be a human-understandable explanation of the black box, or will it provide an approximation of a complex model?
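As one purely illustrative example of such a metric (sparsity, mentioned above), the sketch below computes the fraction of zero-valued coefficients in an L1-regularized logistic regression; the dataset, model, and regularization strength are assumptions for illustration and are not prescribed by this call.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# L1 regularization drives many coefficients exactly to zero; the fraction of
# zeroed coefficients is one simple, reportable notion of sparsity.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
).fit(X, y)

coefs = model.named_steps["logisticregression"].coef_.ravel()
sparsity = float(np.mean(coefs == 0.0))  # share of input features the model ignores
print(f"sparsity = {sparsity:.2f} ({int((coefs == 0.0).sum())} of {coefs.size} coefficients are zero)")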

Note that submissions will be made via Manuscript Central: http://mc.manuscriptcentral.com/tai-ieee

Please select the appropriate special issue when submitting.

Important Dates

Submission deadline: June 1, 2022 (extended to July 1, 2022)

First round of reviews due: September 15, 2022

Revised manuscripts due: October 15, 2022

Final decision: December 15, 2022


Guest Editors

K.P. (Suba) Subbalakshmi, Stevens Institute of Technology, USA

Wojciech Samek, Fraunhofer Heinrich Hertz Institute HHI, Germany

Xia “Ben” Hu, Rice University, USA




