Interpretable AI Services for Enhanced Air Quality Forecasting
DOI: https://doi.org/10.32628/IJSRSET2411239

Keywords: Explainable AI, Machine Learning, Classification, Service Oriented Computing

Abstract
Most Machine Learning (ML) models in use today establish a complex relationship between the independent variables (X) and the dependent variable (y). Without understanding this relationship, we risk introducing undesirable features into the predictions. Biased collection of the data used to build the model can amplify these undesirable features, and the model may soon become unfit for its intended tasks. This project aims to gain deeper insight into such black-box machine learning models by applying various Explainable AI (XAI) tools and providing them as a service to users. Used in conjunction, these tools can make complex models easier for the end user to understand and operate. Specifically, the tools help the user of a machine learning model interact with it and monitor how its behaviour changes when certain aspects of the data are varied. To illustrate the outcomes, the project uses a weather dataset to classify air quality.
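A minimal sketch of the kind of model-inspection workflow the abstract describes: train a black-box classifier on synthetic weather-style features, then apply a model-agnostic explanation method to see which inputs the model actually relies on. This sketch uses scikit-learn's permutation importance as the explanation tool; the feature names, units, and threshold are illustrative assumptions, not the paper's actual dataset or toolchain.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical weather predictors: PM2.5, temperature, wind speed, humidity.
X = np.column_stack([
    rng.gamma(2.0, 20.0, n),   # pm25 (µg/m³)
    rng.normal(25, 5, n),      # temperature (°C)
    rng.gamma(2.0, 2.0, n),    # wind speed (m/s)
    rng.uniform(20, 90, n),    # relative humidity (%)
])
# Illustrative label: air quality is "unhealthy" when PM2.5 exceeds 50 µg/m³.
y = (X[:, 0] > 50).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Permutation importance: shuffle one feature at a time and measure the drop
# in held-out accuracy — a model-agnostic view of what the black box uses.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
names = ["pm25", "temperature", "wind_speed", "humidity"]
for name, imp in sorted(zip(names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:12s} {imp:.3f}")
```

Because the synthetic label depends only on PM2.5, permuting that column destroys the model's accuracy while the other features barely matter, which is exactly the signal an end user would look for when checking that a deployed model has not latched onto a spurious input.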
License
Copyright (c) 2024 International Journal of Scientific Research in Science, Engineering and Technology
This work is licensed under a Creative Commons Attribution 4.0 International License.