Interpretable AI Services for Enhanced Air Quality Forecasting

Authors

  • Ketan Shahapure, Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, USA
  • Samit Shivadekar, Harrisburg University of Science and Technology, USA
  • Bhrigu Bhargava, Harrisburg University of Science and Technology, USA

DOI:

https://doi.org/10.32628/IJSRSET2411239

Keywords:

Explainable AI, Machine Learning, Classification, Service Oriented Computing

Abstract

Most Machine Learning (ML) models in use today establish a complex relationship between the independent variables (X) and the dependent variable (y). Without understanding this relationship, we risk introducing undesirable features into the predictions, and biased collection of the data used to build the model can reinforce those features, eventually rendering the model unfit for its intended task. This project seeks deeper insight into such black-box machine learning models by examining various Explainable AI (XAI) tools and providing them as a service to users. Used in conjunction, these tools can make complex models easy for the end user to understand and operate: they let the user interact with the model and monitor how its behavior changes when certain aspects of the data are changed. To make the achieved outcome easier to follow, the project uses a weather dataset to classify air quality.
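The kind of model-agnostic probing the abstract describes — perturbing aspects of the data and watching how a black-box model responds — can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline or dataset: the weather feature names, the synthetic air-quality labels, and the choice of permutation importance as the interpretability check are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weather features: temperature (C), humidity (%), wind speed (m/s).
X = np.column_stack([
    rng.normal(20, 8, 500),
    rng.uniform(20, 90, 500),
    rng.exponential(3, 500),
])
# Synthetic label: "poor" air quality when humidity is high and wind is low.
y = ((X[:, 1] > 60) & (X[:, 2] < 3)).astype(int)

def black_box_predict(X):
    """Stand-in for any trained classifier treated as a black box."""
    return ((X[:, 1] > 60) & (X[:, 2] < 3)).astype(int)

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Model-agnostic check: mean drop in accuracy when one feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = (predict(X) == y).mean()
    scores = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's relationship to y
            drops.append(baseline - (predict(Xp) == y).mean())
        scores.append(np.mean(drops))
    return np.array(scores)

imp = permutation_importance(black_box_predict, X, y)
for name, s in zip(["temperature", "humidity", "wind speed"], imp):
    print(f"{name}: {s:.3f}")
```

On this synthetic setup, shuffling temperature leaves accuracy unchanged while shuffling humidity or wind speed degrades it, which is exactly the signal an end user would read off to learn which inputs the model actually relies on.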




Published

13-04-2024

Issue

Vol. 11, No. 2 (2024)
Section

Research Articles

How to Cite

[1] Ketan Shahapure, Samit Shivadekar, and Bhrigu Bhargava, "Interpretable AI Services for Enhanced Air Quality Forecasting", Int J Sci Res Sci Eng Technol, vol. 11, no. 2, pp. 260–272, Apr. 2024, doi: 10.32628/IJSRSET2411239.
