INTERNATIONAL JOURNAL OF LATEST TECHNOLOGY IN ENGINEERING,
MANAGEMENT & APPLIED SCIENCE (IJLTEMAS)
ISSN 2278-2540 | DOI: 10.51583/IJLTEMAS | Volume XIII, Issue V, May 2024
Exploring the Role of Explainable AI in Compliance Models for
Fraud Prevention
Chiamaka Daniella Okenwa, Omoyin Damilola David, Adeyinka Orelaja, *Oladayo Tosin Akinwande
Veritas University, Abuja, Nigeria
*Corresponding author
DOI: https://doi.org/10.51583/IJLTEMAS.2024.130524
Received: 01 June 2024; Accepted: 05 June 2024; Published: 27 June 2024
Abstract: The integration of explainable Artificial Intelligence (XAI) methodologies into compliance frameworks offers considerable potential for strengthening fraud prevention strategies across diverse sectors. This paper explores the role of
explainable AI in compliance models for fraud prevention. In highly regulated sectors like finance, healthcare, and cybersecurity,
XAI helps identify abnormal behaviour and ensure regulatory compliance by offering visible and comprehensible insights into
AI-driven decision-making processes. The findings indicate the extent to which XAI can improve the efficacy, interpretability,
and transparency of initiatives aimed at preventing fraud. Stakeholders can comprehend judgements made by AI, spot fraudulent
tendencies, and rank risk-reduction tactics using XAI methodologies. The paper also emphasizes how crucial interdisciplinary collaboration is to the advancement of XAI and its incorporation into compliance models for fraud detection across multiple sectors. In conclusion, XAI in compliance models plays a vital role in fraud prevention. Through the utilization of
transparent and interpretable AI tools, entities can strengthen their ability to withstand fraudulent operations, build trust among
stakeholders, and maintain principles within evolving regulatory systems.
Keywords: Artificial intelligence, Explainable AI, Interpretability, Explanations, Machine learning, Fraud security
I. Introduction
Applications of artificial intelligence (AI) have attracted a lot of attention in the field of research in recent years and have
impacted almost every aspect of our lives, promoting automation and innovation in a wide range of industries (Xu et al., 2021).
Currently, AI is a disruptive technology that may impact corporate operations considerably. Previous research has investigated
the role of AI in improving business innovation, modifying business processes, examining customer requirements, and utilizing
data analytics for the provision of more accurate and improved managerial decision-making (Wamba-Taguimdje et al., 2020; Al-
Anqoudi et al., 2021; Zhou et al., 2020; Gupta et al., 2022; Oladele et al., 2024).
Although numerous studies have shown the potential and substantial benefits of AI applications in business operations, it nonetheless remains unclear to what extent explainable AI (XAI) will affect organizations (Enholm et al., 2021). The
goal of the developing field of XAI is to increase the transparency and comprehensibility of AI systems for human users (Ali et
al., 2023). In this regard, XAI focuses on delivering precise and understandable explanations for the choices and results of AI
models, especially in high-stakes situations where responsibility and trust are essential. Transparent and interpretable models are
becoming more and more essential as AI technologies are incorporated into vital fields like healthcare, banking, criminal justice,
and national security (Kumar et al., 2024). XAI addresses the crucial requirement for interpretability and transparency in machine
learning models. Moreover, XAI is important because it helps promote social acceptability, accountability, and trust in AI
technologies. Therefore, AI systems must not only provide precise predictions but also justify their conceptualizations in a way
that is comprehensible to domain experts and end users in situations where decisions might have significant effects on users.
Specifically in delicate areas like fraud prevention and compliance, transparency and interpretability are fundamental concepts in
the creation and application of AI models (Odeyemi et al., 2024). XAI approaches, for example, make it possible for regulators
and investigators to examine model predictions, comprehend the reasoning behind transactions or activities that have been
highlighted, and find possible signs of fraudulent activity when it comes to fraud detection (Confalonieri et al., 2019). As such,
XAI gives stakeholders the capacity to evaluate the fairness and dependability of AI-driven fraud detection systems by
illuminating the characteristics and patterns that underpin model predictions.
Deep neural networks and other complicated machine learning models have become increasingly popular, and their "black box"
nature has sparked concerns about how quickly these systems are being adopted (Gupta et al., 2021). These models can perform
remarkably well, but because of their frequently opaque inner workings, it can be challenging for users to comprehend how they make decisions. This opacity becomes problematic in sensitive fields where decisions can have a significant impact on people. In general, the
emergence of XAI represents a significant advancement in AI development since it seeks to increase the reliability,
accountability, and accessibility of these potent technologies for a variety of users and applications. This review investigates the
role of XAI in compliance models for fraud prevention.
II. A Brief History of XAI
The first research on the concept of XAI occurred forty years ago, when certain expert systems explained their outcomes in terms of the rules they applied (Scott et al., 1977; Swartout, 1981). Ever since AI research commenced, researchers have contended that
intelligent systems ought to elucidate the outcomes of AI, particularly as it relates to decision-making. For instance, if a rule-
based expert system declines a credit card payment, it ought to explain its decision. Knowledge-based expert systems and their rules are easily interpreted and understood by people since they are created and defined by human experts. The decision tree is a
commonly used approach that has an explainable structure (Xu et al., 2019). Nonetheless, within the framework of contemporary
deep learning, XAI has emerged as a novel area of study.
Interpretability was frequently an implicit feature of straightforward rule-based systems and expert systems in the early phases of
AI research. These systems depended on transparent decision criteria that were simple for human specialists to comprehend and
verify (Tursunalieva A. et al., 2024). However, black-box models, which put predictive accuracy ahead of interpretability, were introduced with the development of machine learning algorithms in the 20th century, notably neural networks and decision trees. This change made it harder to understand and trust AI-generated results, particularly in industries with high
stakes like banking and healthcare (Hassija V. et al., 2023). The development of XAI tools to clarify model predictions and
decision-making processes was prompted by the increasing realization of the constraints associated with black box models. Early
methods included feature significance analysis, which prioritized features according to their contribution to predictions, and sensitivity
analysis, which assessed the effect of input features on model outputs (Wang et al., 2000).
Model-agnostic techniques like LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive
exPlanations), which offer local and global explanations for black-box models, have been added to the repertoire of XAI
techniques over time by researchers (Messalas et al., 2019). Furthermore, because of their intrinsic transparency and
understandability, interpretable model architectures including decision trees, rule-based systems, and sparse linear models have
become more popular (Tursunalieva A. et al., 2024). Interdisciplinary cooperation involving academics in computer science,
cognitive psychology, human-computer interaction, and ethics has aided in the development of XAI. The development of XAI
approaches and frameworks that are suited to a variety of user needs and preferences has been enhanced by this convergence of
expertise (Ali et al., 2023). Future research initiatives aimed at addressing scalability, robustness, and domain-specific issues
should propel the development of XAI. XAI can democratize AI technology and enable people to make wise decisions by
promoting a symbiotic interaction between humans and AI systems (Saeed & Omlin, 2023).
III. Fraud Prevention Within Compliance Models
A vital component of risk management for organizations in a variety of industries is fraud prevention within compliance models.
These models comprise a collection of procedures, roles, and policies intended to discourage, identify, and lessen fraudulent
activity (Odeyemi et al., 2024). They also guarantee compliance with industry standards and legal requirements, which are vital
for protecting businesses from financial losses, harm to their reputations, and legal ramifications.
Strict guidelines are established by regulatory agencies and governing bodies in the field of fraud detection to guarantee the safety
and equity of financial transactions. These laws require businesses to use machine learning models that are transparent and
understandable. To demonstrate compliance with regulatory norms, a fraud detection model must be able to clearly explain how it
comes to its conclusions (Rane et al., 2023). Transparent organizations are better able to comply with legal frameworks and
communicate with regulatory agencies, which makes it easier to get approval for implementing and maintaining fraud detection
systems. Even though compliance models are crucial for preventing fraud, there are still many obstacles that organizations must
overcome to successfully identify and stop fraud within legal boundaries (Hilal et al., 2021). The dynamic nature of fraud
schemes poses a significant problem, necessitating ongoing modifications to training curricula and fraud prevention tactics to stay
up to speed with new risks. Furthermore, to effectively identify and respond to fraudulent actions, the complexity of fraud
detection, particularly in financial institutions and government programs, necessitates the use of rigorous detection procedures
such as thorough document scrutiny and auditing.
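To make the regulatory expectation that a fraud detection model "clearly explain how it comes to its conclusions" concrete, the following minimal Python sketch shows one way an organization might record the strongest drivers behind each flagged transaction in an auditable form. The field names, threshold, and contribution values are hypothetical illustrations, not part of any specific compliance standard.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ExplanationRecord:
    """Auditable record of why a transaction was scored the way it was (hypothetical schema)."""
    transaction_id: str
    model_version: str
    risk_score: float
    decision: str                      # e.g. "flagged" or "cleared"
    top_contributions: dict            # feature name -> contribution to the risk score
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def build_record(transaction_id, model_version, risk_score, contributions, threshold=0.8):
    """Keep only the strongest drivers so reviewers and regulators see the reasoning at a glance."""
    top = dict(sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5])
    decision = "flagged" if risk_score >= threshold else "cleared"
    return ExplanationRecord(transaction_id, model_version, risk_score, decision, top)

# Contributions could come from SHAP values or another attribution method (illustrative values).
record = build_record(
    "txn-0042", "fraud-model-1.3", 0.91,
    {"amount_vs_history": 0.34, "new_device": 0.22, "geo_mismatch": 0.18, "hour_of_day": 0.05},
)
print(json.dumps(asdict(record), indent=2))   # serialized for the audit log
```

Retaining such records alongside model predictions gives investigators and regulators a durable, case-by-case trail of the reasoning behind each decision.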
Recognizing and Rectifying Biases in Models
According to Max et al. (2021), the data used to train machine-learning models can contain biases. Biased models in
fraud detection can operate unfairly or disproportionately affect some demographic groups. When it comes to recognizing and
correcting these biases, explainability becomes an essential tool. Through the provision of information on the salient
characteristics that impact model decisions, interested parties can evaluate if the model is unintentionally favouring some groups
over others. In addition to being morally required, addressing these biases complies with legal requirements that support equality
and nondiscrimination in algorithmic decision-making (Fritz-Morgenthal et al., 2022). Overall, there are several reasons why
explainability is important for fraud prevention. Transparency is necessary for regulatory compliance, stakeholder trust depends
on stakeholders' ability to comprehend and interpret model judgements, and knowing the inner workings of these models is
necessary for the identification and mitigation of biases. The incorporation of explainability becomes essential for responsible and
efficient deployment in a variety of businesses as fraud detection technology advances.
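As one illustration of how such a bias review might look in practice, the short sketch below compares the rate at which a model flags transactions across demographic groups and reports the ratio between the least- and most-flagged groups. The group labels and flag outcomes are hypothetical, and this is only one of many possible fairness checks.

```python
import pandas as pd

# Hypothetical review data: one row per scored transaction.
df = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "flagged": [1,   0,   0,   1,   1,   0,   1,   0,   0,   1],
})

# Flag rate per demographic group.
flag_rates = df.groupby("group")["flagged"].mean()
print(flag_rates)

# Ratio of the lowest to the highest flag rate; values far below 1.0 suggest
# the model disproportionately flags some groups and warrants closer auditing.
ratio = flag_rates.min() / flag_rates.max()
print(f"Flag-rate disparity ratio: {ratio:.2f}")
```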
Concepts of Models
Explainability is a model's ability to make its decision-making process understandable to stakeholders, users, and regulatory
agencies. It entails disclosing the internal workings, characteristics, and inputs that support the model's predictions to provide a
clearer understanding of how the model arrives at particular conclusions. However, interpretability goes a step further and
concentrates on providing explanations in a way that people who might not be professionals in data science or machine learning
can understand and use. An interpretable model helps non-technical stakeholders make informed decisions by offering insights
that are understandable and relevant to them (Arrieta et al., 2020; McWaters, 2019).
Furthermore, Ribeiro et al. (2016) stated that model-agnostic interpretability techniques can be used with any machine learning model, regardless of its complexity and underlying architecture. These techniques are useful for evaluating and contrasting various models because they provide a broad grasp of model behaviour and decision considerations. In contrast, model-
specific interpretability makes use of a model's special features, structures, and parameters to cater to the distinct qualities of a
certain kind of model. Model-specific approaches are restricted to that particular model type, even though they could offer more
subtle insights into a certain model's decision-making (Hassija V. et al., 2023).
Explaining a model's predictions for a particular case or group of occurrences is the main goal of local interpretability. It offers
case-by-case transparency by illuminating the rationale behind a specific prediction (Tursunalieva A. et al., 2024). Global interpretability, by contrast, offers a broader perspective on how the model behaves across the whole dataset. It seeks to identify broad patterns, trends, and significant elements that influence the model's judgements on a larger scale.
Comprehending these fundamental ideas is necessary to develop and apply practical methods that improve the interpretability and
explainability of fraud detection models. It is essential to strike a balance between these elements to create models that are
transparent, intelligible, and accurate for a range of stakeholders (Marcinkevičs & Vogt, 2023).
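To ground the local/global distinction, the sketch below computes SHAP values for a gradient boosting classifier trained on synthetic data: the SHAP values for one row explain a single prediction (local interpretability), while averaging their magnitudes over the dataset ranks features overall (global interpretability). This is a minimal illustration assuming the shap package is installed; the feature names and data are synthetic placeholders.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for transaction features and fraud labels.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # per-sample, per-feature contributions

# Local interpretability: why was this one transaction scored the way it was?
local = dict(zip(feature_names, shap_values[0]))
print("Local explanation for instance 0:", local)

# Global interpretability: which features matter most across the whole dataset?
global_importance = dict(zip(feature_names, np.abs(shap_values).mean(axis=0)))
print("Global importance ranking:",
      sorted(global_importance.items(), key=lambda kv: kv[1], reverse=True))
```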
IV. Effectiveness of XAI in Identifying and Preventing Anomalies
A large number of studies are theoretical, with only a small number of articles discussing XAI's real-world applications.
Dhanorkar S. et al., (2021) discovered that rather than being a static feature of a model, explanations are dynamic, iterative, and
emergent. Certain aspects of the application of XAI described in the literature are summarized in Table 1. Research on strategies for using XAI in the financial services industry includes the work of Bracke et al. (2019), who used the quantitative input influence method to provide an explainability strategy for predicting mortgage defaults. An XAI model for fintech risk management is put forth by Bussmann et al. (2020). Qadi et al. (2021) benchmarked various machine learning models enhanced by SHAP, with an emphasis on the credit scoring of businesses, and LIME and SHAP were used for machine learning-based credit scoring models by Misheva et al. (2021).
Table 1: Classification of the XAI Features

Classification                  | Meaning                                                                                  | References
AI                              | AI function and effects                                                                  | Meske et al., 2020; Dhanorkar S. et al., 2021
Overall XAI                     | Overarching guidelines, values, and methods for XAI operations                          | Mohseni et al., 2021; Leslie, 2019
Transparency and explainability | The function and significance of explainability and transparency                        | Miller et al., 2017; Koster et al., 2021
XAI system                      | Objective and method of the XAI system                                                  | Mohseni et al., 2021; Leslie, 2019
XAI techniques and methods      | XAI strategies and tactics; use-case strategies and tactics for creating the XAI system | Morley et al., 2021; Schwalbe & Finzel, 2023
Through the provision of a window into the AI models' decision-making process, XAI approaches enable analysts to decipher and
verify model predictions. XAI helps stakeholders recognize unusual occurrences and comprehend the reasoning behind them by
offering explanations for specific predictions (Ali et al., 2023). The identification of anomalous patterns is facilitated by local
interpretability techniques like SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic
Explanations), which emphasize the significance of each characteristic to model predictions (Antwarg et al., 2021).
By systematically measuring the impact of input features on model predictions, XAI techniques such as feature significance analysis help analysts pinpoint the factors causing abnormal behaviour. XAI makes it easier to prioritize the variables causing anomalies by ranking characteristics according to their significance (Vivian W.-M. Lai et al., 2020). In addition, by identifying
crucial factors linked to anomalous occurrences, feature importance analysis is a useful technique for anomaly detection in a
variety of fields, such as cybersecurity, fraud detection, and predictive maintenance (Pinto & Sobreiro, 2022).
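A minimal illustration of this idea is sketched below: an isolation forest flags anomalous observations in synthetic transaction data, and ranking how far each flagged observation's features deviate from typical values serves as a simple stand-in for more formal attribution methods such as SHAP. The feature names and distributions are hypothetical.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
feature_names = ["amount", "txn_per_hour", "account_age_days"]

# Mostly normal activity, plus a handful of injected anomalies.
normal = rng.normal(loc=[50, 2, 400], scale=[20, 1, 100], size=(500, 3))
anomalies = rng.normal(loc=[900, 30, 5], scale=[50, 5, 2], size=(5, 3))
X = np.vstack([normal, anomalies])

detector = IsolationForest(random_state=0).fit(X)
labels = detector.predict(X)                     # -1 marks observations deemed anomalous

mu, sigma = normal.mean(axis=0), normal.std(axis=0)
for idx in np.where(labels == -1)[0][:3]:
    # Rank features by how many standard deviations they sit from typical behaviour.
    z = np.abs((X[idx] - mu) / sigma)
    ranked = sorted(zip(feature_names, z), key=lambda kv: kv[1], reverse=True)
    print(f"Anomaly at row {idx}: " + ", ".join(f"{n}={s:.1f} sd" for n, s in ranked))
```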
V. Practical Applications of XAI in Fraud Prevention
AI has proven to be a highly effective tool for preventing fraud in the financial services industry, as seen in the numerous successful cases of fraud thwarted by AI. The practical applications of incorporating AI into fraud protection include noteworthy reductions in false positives and negatives, offering valuable insights to enterprises employing these cutting-edge technologies (Akindote et
al., 2023). The potential of AI to effectively prevent a variety of fraudulent actions is one of the technology's most intriguing real-
world applications in the field of fraud detection. AI systems use sophisticated algorithms, machine learning, and pattern
recognition to analyze enormous volumes of data in real time, allowing them to spot intricate and subtle patterns that point to
fraudulent activity.
Specifically, AI algorithms can identify irregularities in transaction patterns, which may indicate fraudulent activities including
identity theft, unauthorized access, or fraudulent transactions. Financial institutions have a strong tool to fight more complex
fraud schemes using the capacity of XAI models to adapt and learn from new data, which guarantees that they can remain ahead
of changing fraud strategies (Mohanty et al., 2023). Case studies from the banking sector have shown situations in which fraud
detection systems driven by AI have stopped large financial losses. These achievements highlight the usefulness of AI in spotting
fraudulent tendencies that conventional approaches could miss, protecting the financial integrity of organizations and their clients.
Furthermore, interpretable models for fraud detection have been successfully adopted by several financial organizations (Zhu et
al., 2021). These models improve transparency by fusing rule-based systems with feature importance analysis, which helps
investigators comprehend and validate transactions that have been highlighted. Explainable models are also used by e-commerce businesses to identify fraudulent activity that occurs during online transactions. Particularly, LIME and SHAP have
been used to give local interpretability, which enables analysts to comprehend the reasons for the flagging of particular
transactions as possibly fraudulent (Lin & Gao, 2022). According to Jiang et al. (2022), organizations that employ interpretable fraud detection models reported improvements in stakeholder trust and user experience as a result of explainability.
Therefore, increased user confidence in the system's capacity to detect and prevent fraud contributes to a more seamless and
acceptable implementation.
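As a hedged illustration of the local interpretability described above, the sketch below fits a simple classifier on synthetic order data and asks LIME for the top features behind one prediction. It assumes the lime package is installed, and the feature names are hypothetical placeholders rather than fields from any real e-commerce system.

```python
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for order features and fraud labels.
X, y = make_classification(n_samples=400, n_features=5, random_state=1)
feature_names = ["order_value", "account_age", "ip_risk", "items_count", "shipping_distance"]

model = RandomForestClassifier(random_state=1).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["legitimate", "fraud"], mode="classification"
)

# Explain the model's prediction for one transaction.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
for feature_rule, weight in explanation.as_list():
    print(f"{feature_rule}: {weight:+.3f}")     # local contribution toward the fraud class
```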
VI. Ethical Considerations in AI-driven Fraud Prevention
Financial services fraud prevention has greatly benefited from AI, but ethical issues must be addressed to maintain fairness, transparency, and regulatory compliance (Max et al., 2021). The integration of AI-driven systems in fraud detection necessitates
resolving biases, guaranteeing openness in model operations, and complying with regulatory frameworks. An important ethical
factor in AI-driven fraud prevention is the possibility of algorithmic biases. AI algorithms make decisions based on previous data,
and if that data has biases, the model may reinforce and even magnify such prejudices (Gichoya et al., 2023). For example, the AI
model may unintentionally absorb and perpetuate biases against specific demographics, such as age, gender, or ethnicity, if
previous data reveals such biases.
Organizations must put policies in place to identify and lessen biases when developing and deploying AI models (Odeyemi et al.,
2024). To do this, models must be routinely audited for bias, training data must be representative and diverse, and fairness
measures must be included to evaluate the model's effects on various demographic groups. Furthermore, biases that could develop
over time must be minimized and corrected by constant observation and improvement. One of the most important ethical factors
in AI-driven fraud prevention is transparency. Numerous AI models, particularly intricate ones like neural networks, tend to
function as black boxes, making it difficult to comprehend the logic behind their judgements (Buhrmester et al., 2021).
Challenges concerning accountability and the capacity to explain model outputs are raised by this lack of transparency,
particularly when making important financial decisions. Transparency must be prioritized by organizations through the use of
XAI methodologies. In this regard, explainable models offer valuable perspectives into decision-making processes, facilitating
regulators and consumers alike in comprehending the variables impacting fraud detection results (Fritz-Morgenthal et al., 2022).
Transparent AI encourages the ethical use of AI in fraud prevention by improving accountability and building trust among users
and stakeholders. Regulatory compliance therefore becomes increasingly important as AI gets more and more involved in
preventing fraud. Several laws on fair lending practices, consumer protection, and data privacy apply to financial services. To
ensure ethical and legal use, AI-based fraud detection systems have to comply with these standards.
Ethical issues in AI-driven fraud prevention are critical to ensuring equity, openness, and adherence to legal requirements. The
appropriate application of AI in fraud detection involves addressing biases, maintaining transparency in model operations, and
abiding by regulatory frameworks. Corporations that place a high priority on ethical issues not only reduce the risks of biased
decision-making but also increase consumer and regulatory trust, resulting in a more moral and long-lasting environment for AI
in financial services (Shneiderman, 2020; Zhao & Gómez Fariñas, 2022).
VII. Conclusion
Artificial intelligence (AI) is expected to play a more significant role in fraud prevention and detection as the financial services
sector continues to struggle with more complex fraud threats (Hassan et al., 2023). Examining new technologies, advances in
XAI, federated learning, and cooperative efforts between regulatory agencies and financial institutions are some of the ways to
predict future trends and improvements in this field. AI-driven fraud detection systems are incorporating sophisticated biometric
authentication techniques through the use of emerging technologies. By individually identifying people based on their physical
characteristics and behavioural patterns, biometrics such as facial recognition, fingerprint scanning, and behavioural biometrics
offer an extra degree of protection (Dargan and Kumar, 2020). Artificial intelligence algorithms examine these biometric traits
instantly to identify and stop fraudulent or unauthorized activity.
It is anticipated that future advancements in XAI will concentrate on improving the interpretability and usability of AI models.
This includes creating interactive dashboards, visualization tools, and clearer justifications for intricate AI judgements. According
to Díaz-Rodríguez et al. (2023), financial institutions will place a higher priority on XAI to comply with regulatory requirements,
resolve ethical problems, and increase end-user trust.
It is clear from examining XAI's role in compliance models for fraud prevention in the financial services industry that AI has
ushered in a revolutionary period in the fight against financial crimes. By incorporating XAI approaches into compliance
models, fraud prevention measures become more transparent and comprehensible, making it easier for stakeholders to
comprehend AI-driven choices and recognize unusual behaviour. By offering insights into model predictions, measuring feature relevance, spotting outlier patterns, and incorporating domain expertise, XAI improves the identification of fraudulent
activity. Organizations must adhere to industry standards and regulatory guidelines to reduce the risk of fraud, safeguard the
interests of stakeholders, and uphold legal and ethical compliance. Therefore, it is recommended that future studies concentrate
on resolving issues with scalability, robustness, and integration with current systems that arise when implementing XAI in
compliance models.
References
1. Akindote, O. J., Abimbola Oluwatoyin Adegbite, Samuel Onimisi Dawodu, Adedolapo Omotosho, Anthony Anyanwu,
& Chinedu Paschal Maduka. (2023). Comparative review of big data analytics and GIS in healthcare decision-making.
World Journal of Advanced Research and Reviews, 20(3), 1293-1302. https://doi.org/10.30574/wjarr.2023.20.3.2589
2. Al-Anqoudi, Y., Al-Hamdani, A., Al-Badawi, M., & Hedjam, R. (2021). Using Machine Learning in Business Process
Re-Engineering. Big Data and Cognitive Computing, 5(4), 61. https://doi.org/10.3390/bdcc5040061
3. Ali, S., Abuhmed, T., El-Sappagh, S., Muhammad, K., Alonso-Moral, J. M., Confalonieri, R., Guidotti, R., Ser, J. D.,
Díaz-Rodríguez, N., & Herrera, F. (2023). Explainable Artificial Intelligence (XAI): What we know and what is left to
attain Trustworthy Artificial Intelligence. Information Fusion, 99, 101805.
https://doi.org/10.1016/j.inffus.2023.101805
4. Antwarg, L., Miller, R. M., Shapira, B., & Rokach, L. (2021). Explaining anomalies detected by autoencoders using
Shapley Additive Explanations. Expert Systems with Applications, 186, 115736.
https://doi.org/10.1016/j.eswa.2021.115736
5. Arrieta, B. A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina,
D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies,
opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
https://arxiv.org/pdf/1910.10045.pdf
6. Bracke, P., Datta, A., Jung, C., & Sen, S. (2019). Machine Learning Explainability in Finance: An Application to
Default Risk Analysis. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3435104
7. Buhrmester, V., Münch, D., & Arens, M. (2021). Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey. Machine Learning and Knowledge Extraction, 3(4), 966-989.
https://doi.org/10.3390/make3040048
8. Bussmann, N., Giudici, P., Marinelli, D., & Papenbrock, J. (2020). Explainable AI in Fintech Risk Management.
Frontiers in Artificial Intelligence, 3. https://doi.org/10.3389/frai.2020.00026
9. Confalonieri, R., Prado, del, Sebastia Agramunt, Malagarriga, D., Faggion, D., Tillman Weyde, & Besold, T. R. (2019).
An Ontology-based Approach to Explaining Artificial Neural Networks. ArXiv (Cornell University).
10. Dargan, S., & Kumar, M. (2020). A comprehensive survey on the biometric recognition systems based on physiological
and behavioral modalities. Expert Systems with Applications, 143, 113114. https://doi.org/10.1016/j.eswa.2019.113114
11. Dhanorkar S., Wolf, C. T., Qian, K., Xu, A., Popa, L., & Li, Y. (2021). Who needs to know what, when? Broadening the
Explainable AI (XAI) Design Space by Looking at Explanations Across the AI Lifecycle. Designing Interactive Systems
Conference 2021. https://doi.org/10.1145/3461778.3462131
12. Díaz-Rodríguez, N., Del Ser, J., Coeckelbergh, M., López de Prado, M., Herrera-Viedma, E., & Herrera, F. (2023).
Connecting the dots in trustworthy Artificial Intelligence: From AI principles, ethics, and key requirements to
responsible AI systems and regulation. Information Fusion, 99, 101896. https://doi.org/10.1016/j.inffus.2023.101896
13. Enholm, I. M., Papagiannidis, E., Mikalef, P., & Krogstie, J. (2021). Artificial Intelligence and Business Value: a
Literature Review. Information Systems Frontiers, 24(5), 1709-1734. https://doi.org/10.1007/s10796-021-10186-w
14. Fritz-Morgenthal, S., Hein, B., & Papenbrock, J. (2022). Financial Risk Management and Explainable, Trustworthy,
Responsible AI. Frontiers in Artificial Intelligence, 5(1). https://doi.org/10.3389/frai.2022.779799
15. Gichoya J. W., Thomas, K. J., Leo Anthony Celi, Safdar, N. M., Banerjee, I., Banja, J. D., Laleh Seyyed-Kalantari,
Trivedi, H., & Saptarshi Purkayastha. (2023). AI pitfalls and what not to do: Mitigating bias in AI. British Journal of
Radiology, 96(1150). https://doi.org/10.1259/bjr.20230023
16. Gupta, S., & Gupta, B. (2022). Insights into the Black Box Machine Learning Models Through Explainability and
Interpretability. Lecture Notes in Networks and Systems, 633-644. https://doi.org/10.1007/978-981-16-9967-2_59
17. Gupta, S., Modgil, S., Bhattacharyya, S., & Bose, I. (2021). Artificial intelligence for decision support systems in the
field of operations research: review and future scope of research. Annals of Operations Research.
https://doi.org/10.1007/s10479-020-03856-6
18. Hassan, A. O., Ewuga, S. K., Abdul, A. A., Abrahams, T. O., Oladeinde, M., & Dawodu, S. O. (2024).
Cybersecurity in banking: A global perspective with a focus on Nigerian practices. Computer Science & IT Research Journal, 5(1), 41-59. https://doi.org/10.51594/csitrj.v5i1.701
19. Hassija V., Vinay Chamola, Mahapatra, A., Singal, A., Goel, D., Huang, K., Scardapane, S., Spinelli, I., Mahmud, M., &
Hussain, A. (2023). Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cognitive
Computation, 16. https://doi.org/10.1007/s12559-023-10179-8
20. Hilal, W., Andrew Gadsden, S., & Yawney, J. (2021). A Review of Anomaly Detection Techniques and Applications in
Financial Fraud. Expert Systems with Applications, 193(1), 116429. https://doi.org/10.1016/j.eswa.2021.116429
21. Jiang, J., Kahai, S., & Yang, M. (2022). Who needs explanation and when? Juggling explainable AI and user epistemic
uncertainty. International Journal of Human-Computer Studies, 165, 102839. https://doi.org/10.1016/j.ijhcs.2022.102839
22. Koster, O., Kosman, R., & Visser, J. (2021). A Checklist for Explainable AI in the Insurance Domain. Communications
in Computer and Information Science, 446-456. https://doi.org/10.1007/978-3-030-85347-1_32
23. Kumar, J. R. R., Kalnawat, A., Pawar, A. M., Jadhav, V. D., Srilatha, P., & Khetani, V. (2024). Transparency in
Algorithmic Decision-making: Interpretable Models for Ethical Accountability. E3S Web of Conferences, 491, 02041.
https://doi.org/10.1051/e3sconf/202449102041
24. Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529
25. Lin, K., & Gao, Y. (2022). Model interpretability of financial fraud detection by group SHAP. Expert Systems with
Applications, 210, 118354. https://doi.org/10.1016/j.eswa.2022.118354
26. Marcinkevičs, R., & Vogt, J. E. (2023). Interpretable and explainable machine learning: A methods‐centric overview
with concrete examples. WIREs Data Mining and Knowledge Discovery. https://doi.org/10.1002/widm.1493
27. Max, R., Kriebitz, A., & Von Websky, C. (2021). Ethical Considerations About the Implications of Artificial
Intelligence in Finance. International Handbooks in Business Ethics, 577-592. https://doi.org/10.1007/978-3-030-29371-0_21
28. McWaters, R. J. (2019, October 23). Navigating Uncharted Waters: A roadmap to responsible innovation with AI in
financial services. World Economic Forum. https://www.weforum.org/publications/navigating-uncharted-waters-a-
roadmap-to-responsible-innovation-with-ai-in-financial-services/
29. Meske, C., Bunde, E., Schneider, J., & Gersch, M. (2020). Explainable Artificial Intelligence: Objectives, Stakeholders,
and Future Research Opportunities. Information Systems Management, 39(1), 111.
https://doi.org/10.1080/10580530.2020.1849465
30. Messalas, A., Kanellopoulos, Y., & Makris, C. (2019, July 1). Model-Agnostic Interpretability with Shapley Values.
IEEE Xplore. https://doi.org/10.1109/IISA.2019.8900669
31. Miller, T., Howe, P., & Sonenberg, L. (2017, December 4). Explainable AI: Beware of Inmates Running the Asylum Or:
How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences. ArXiv.org.
https://doi.org/10.48550/arXiv.1712.00547
32. Misheva, B. H., Osterrieder, J., Hirsa, A., Kulkarni, O., & Lin, S. F. (2021). Explainable AI in Credit Risk Management.
Arxiv.org. https://doi.org/10.48550/arXiv.2103.00949
33. Mohanty, B., Manipal, A., & Mishra, S. (2023). Role of artificial intelligence in financial fraud detection. Academy of Marketing Studies Journal, 27(1). https://www.abacademies.org/articles/role-of-artificial-intelligence-in-financial-fraud-detection.pdf
34. Mohseni, S., Zarei, N., & Ragan, E. D. (2021). A Multidisciplinary Survey and Framework for Design and Evaluation of
Explainable AI Systems. ACM Transactions on Interactive Intelligent Systems, 11(3-4), 1-45.
https://doi.org/10.1145/3387166
35. Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2021). From What to How: An Initial Review of Publicly Available AI
Ethics Tools, Methods and Research to Translate Principles into Practices. Philosophical Studies Series, 153-183.
https://doi.org/10.1007/978-3-030-81907-1_10
36. Odeyemi, O., Noluthando Zamanjomane Mhlongo, Ekene Ezinwa Nwankwo, & Oluwatobi Timothy Soyombo. (2024).
Reviewing the role of AI in fraud detection and prevention in financial services. International Journal of Science and
Research Archive, 11(1), 2101-2110. https://doi.org/10.30574/ijsra.2024.11.1.0279
37. Oladele I., Orelaja A., & Akinwande O. T. (2024). Ethical Implications and Governance of Artificial Intelligence in
Business Decisions: A Deep Dive into the Ethical Challenges and Governance Issues Surrounding the Use of Artificial
Intelligence in Making Critical Business Decisions. International Journal of Latest Technology in Engineering
Management & Applied Science, XIII(II), 48-56. https://doi.org/10.51583/ijltemas.2024.130207
38. Pinto, S. O., & Sobreiro, V. A. (2022). Literature review: Anomaly detection approaches on digital business financial
systems. Digital Business, 100038. https://doi.org/10.1016/j.digbus.2022.100038
39. Qadi A. E., Diaz-Rodriguez, N., Trocan, M., & Frossard, T. (2021). Explaining Credit Risk Scoring through Feature
Contribution Alignment with Expert Risk Analysts. ArXiv (Cornell University).
https://doi.org/10.48550/arxiv.2103.08359
40. Rane, N., Choudhary, S., & Rane, J. (2023). Explainable Artificial Intelligence (XAI) approaches for transparency and
accountability in financial decision-making. Social Science Research Network. https://doi.org/10.2139/ssrn.4640316
41. Ribeiro M. T., Singh, S., & Guestrin, C. (2016). Model-Agnostic Interpretability of Machine Learning. ArXiv (Cornell
University). https://doi.org/10.48550/arxiv.1606.05386
42. Saeed, W., & Omlin, C. (2023). Explainable AI (XAI): A systematic meta-survey of current challenges and future
opportunities. Knowledge-Based Systems, 263, 110273. https://doi.org/10.1016/j.knosys.2023.110273
43. Schwalbe, G., & Finzel, B. (2023). A comprehensive taxonomy for explainable artificial intelligence: a systematic
survey of surveys on methods and concepts. Data Mining and Knowledge Discovery. https://doi.org/10.1007/s10618-
022-00867-8
44. Scott, A. C., Clancey, W. J., Davis, R., & Shortliffe, E. H. (1977). Explanation Capabilities of Production-Based
Consultation Systems. American Journal of Computational Linguistics, 150. https://aclanthology.org/J77-1006
45. Shneiderman, B. (2020). Bridging the Gap Between Ethics and Practice. ACM Transactions on Interactive Intelligent
Systems, 10(4), 1-31. https://dl.acm.org/doi/abs/10.1145/3419764
46. Swartout, W. R. (1981). Explaining and Justifying Expert Consulting Programs. Computers and Medicine, 254-271.
https://doi.org/10.1007/978-1-4612-5108-8_15
47. Tursunalieva A., David, Dunne, R., Li, J., Riera, L., & Zhao, Y. (2024). Making Sense of Machine Learning: A Review
of Interpretation Techniques and Their Applications. Applied Sciences, 14(2), 496.
https://doi.org/10.3390/app14020496
48. Vivian W.-M. Lai, Liu, H., & Tan, C. (2020). “Why is ‘Chicago’ deceptive?” Towards Building Model-Driven Tutorials
for Humans. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3313831.3376873
49. Wamba-Taguimdje, S.-L., Fosso Wamba, S., Kala Kamdjoug, J. R., & Tchatchouang Wanko, C. E. (2020). Influence of
artificial intelligence (AI) on firm performance: the business value of AI-based transformation projects. Business Process
Management Journal, 26(7), 1893-1924.
50. Wang, W., Jones, P., & Partridge, D. (2000). Assessing the Impact of Input Features in a Feedforward Neural Network.
Neural Computing & Applications, 9(2), 101-112. https://doi.org/10.1007/pl00009895
51. Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., & Zhu, J. (2019). Explainable AI: A Brief Survey on History, Research
Areas, Approaches and Challenges. Natural Language Processing and Chinese Computing, 11839, 563-574.
https://doi.org/10.1007/978-3-030-32236-6_51
52. Xu, Y., Wang, Q., An, Z., Wang, F., Zhang, L., Wu, Y., Dong, F., Qiu, C.-W., Liu, X., Qiu, J., Hua, K., Su, W., Xu, H.,
Han, Y., Cao, X., Liu, E., Fu, C., Yin, Z., Liu, M., & Roepman, R. (2021). Artificial Intelligence: A Powerful Paradigm
for Scientific Research. The Innovation, 2(4), 100179.
53. Zhao, J., & Gómez Fariñas, B. (2022). Artificial Intelligence and Sustainable Decisions. European Business
Organization Law Review, 24(1). https://doi.org/10.1007/s40804-022-00262-2
54. Zhou, F., Ayoub, J., Xu, Q., & Jessie Yang, X. (2019). A Machine Learning Approach to Customer Needs Analysis for
Product Ecosystems. Journal of Mechanical Design, 142(1). https://doi.org/10.1115/1.4044435
55. Zhu, X., Ao, X., Qin, Z., Chang, Y., Liu, Y., He, Q., & Li, J. (2021). Intelligent Financial Fraud Detection Practices in
Post-Pandemic Era: A Survey. The Innovation, 2(4), 100176. https://doi.org/10.1016/j.xinn.2021.100176
AUTHORS BIOGRAPHY
1. Chiamaka Okenwa is a software engineer with a track record in the fintech industry, specialising in product design and
development. She is an international speaker and community organiser for unStack Africa and JVM Nigeria, and is
enthusiastic about Java, women in technology, and developer communities.
2. Omoyin Damilola has a Master's in Business Administration with expertise in crafting cutting-edge software solutions
that prioritize security, performance, scalability, and maintainability, adhering to SOLID design principles and Agile
methodologies.
3. Adeyinka Orelaja has a Master's in Computer Science and a Bachelor's in Mathematics, and specializes in predictive
analytics, marketing technology solutions, machine learning, and data-driven decision-making.
4. Oladayo Tosin Akinwande obtained his B.Tech and M.Tech in Computer Science from Federal University of
Technology, Minna, Niger State, Nigeria. He is currently a PhD Student of Computer Science, Federal University of
Technology, Minna, Niger State, Nigeria. His current research interests include artificial intelligence, explainable artificial intelligence, security and privacy issues in artificial intelligence, and information and communication security.
He is a member of Nigeria Computer Society (NCS).