LEGAL MECHANISMS FOR RISK MANAGEMENT WHEN USING ARTIFICIAL INTELLIGENCE IN THE BANK SCORING SYSTEM

Authors

  • Amirjon Mardonov

DOI:

https://doi.org/10.47390/SPR1342V5SI4Y2025N31

Keywords:

artificial intelligence, credit scoring, algorithmic risk, discrimination, transparency, regulation, personal data

Abstract

The article examines legal mechanisms for managing risk when artificial intelligence (AI) systems are deployed in bank credit scoring. It analyzes international standards and approaches (the GDPR, the draft EU AI Act, ISO/IEC 23894:2023) as well as the legislation of the Republic of Uzbekistan (the Law on Personal Data, the AI Development Strategy until 2030, and others). Drawing on comparative analysis and case studies (Apple Card in the USA, SCHUFA in Germany, Asia Alliance Bank in Uzbekistan), the article identifies the key risks of using AI in credit scoring: discrimination, opacity of algorithms, violations of consumer rights, and model inadequacy. It then reviews existing legal measures to minimize these risks, such as requirements to prevent algorithmic discrimination, to ensure the transparency and explainability of decisions, to protect personal data, and to guarantee consumers the right to contest automated decisions. The discussion section offers recommendations for improving the regulation of AI in bank lending in Uzbekistan in light of international experience, including the adoption of special rules for high-risk AI systems, mandatory risk assessments and algorithm audits, and strengthened oversight of compliance with the principles of fairness and transparency. Implementing these measures would reduce the likelihood of algorithmic errors and abuse, increase trust in AI systems in the financial sector, and strike a balance between innovation and the protection of citizens' rights.

References

1. Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255-260.

2. Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1-33.

3. Hanson, M., Cook, S., & Vaidhyanathan, S. (2019). The Apple Card Didn't 'See' Gender—and That's the Problem. Wired.

4. Bode, M., & Helberger, N. (2020). The GDPR and algorithmic decision-making – Safeguarding individual rights but forgetting society. Journal of Consumer Policy, 43, 525-542.

5. Kearns, M., & Roth, A. (2019). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.

6. Kaminski, M. E., & Malgieri, G. (2021). Multi-layered explanations from algorithmic impact assessments in the GDPR. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.

7. Felländer, A., Siri, S., & Teigland, R. (2018). The three phases of regulatory development for AI: A proposed model for balancing innovation and risk. Scandinavian Journal of Risk and Insurance, 34(2), 76-95.

8. Zweigert, K., & Kötz, H. (1998). Introduction to comparative law. Oxford University Press.

9. Article 29 Working Party. (2018). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679.

10. Malgieri, G. (2023). The CJEU's SCHUFA Decision: Automated Credit Scoring Under Art. 22 GDPR. European Data Protection Law Review, 9(3), 386-395.

11. Veale, M., & Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97-112.

12. Dignum, V. (2023). A comprehensive approach to AI risk management. Nature Machine Intelligence, 5, 706-714.

13. Begmatov, A. S. (2020). Legal aspects of personal data protection in the Republic of Uzbekistan. Vestnik TGYuU (Bulletin of Tashkent State University of Law), 4, 56-67. (in Russian)

14. Ministry of Digital Technologies of the Republic of Uzbekistan. Artificial Intelligence Development Strategy. URL: https://gov.uz/ru/digital/pages/about

15. Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T., & Walther, A. (2022). Predictably unequal? The effects of machine learning on credit markets. The Journal of Finance, 77(1), 5-47.

16. Zednik, C. (2021). Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy & Technology, 34, 265-288.

17. Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18.

18. Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.

19. Federal Trade Commission. (2018). Fair Credit Reporting Act provisions and requirements relating to consumer notifications.

20. Ramsay, I. (2016). Consumer law and policy: Text and materials on regulating consumer markets. Bloomsbury Publishing.

21. Hand, D. J., & Henley, W. E. (1997). Statistical classification methods in consumer credit scoring: a review. Journal of the Royal Statistical Society: Series A (Statistics in Society), 160(3), 523-541.

22. Thomas, L. C. (2009). Consumer credit models: Pricing, profit and portfolios. Oxford University Press, 228-236.

23. Waldman, A. E. (2020). Power, process, and automated decision-making. Fordham Law Review, 88(2), 613-648.

Submitted

2025-05-21

Published

2025-05-27

How to Cite

Mardonov, A. (2025). Legal mechanisms for risk management when using artificial intelligence in the bank scoring system. Actual Problems of Humanities and Social Sciences (Ижтимоий-гуманитар фанларнинг долзарб муаммолари / Актуальные проблемы социально-гуманитарных наук), 5(S/4), 192–202. https://doi.org/10.47390/SPR1342V5SI4Y2025N31