LEGAL MECHANISMS FOR RISK MANAGEMENT IN THE APPLICATION OF ARTIFICIAL INTELLIGENCE TO BANK SCORING SYSTEMS
https://doi.org/10.47390/SPR1342V5SI4Y2025N31
Keywords
artificial intelligence, credit scoring, algorithmic risk, discrimination, transparency, regulation, personal data.
Abstract
The article examines the legal mechanisms for managing risks in the deployment of artificial intelligence (AI) systems in bank credit scoring. It analyzes international standards and approaches (the GDPR, the draft EU Artificial Intelligence Act, and the ISO/IEC 23894:2023 standard) as well as the legislation of the Republic of Uzbekistan, including the Law "On Personal Data" and the Strategy for the Development of Artificial Intelligence until 2030. Based on comparative analysis and case studies (Apple Card in the USA, SCHUFA in Germany, Asia Alliance Bank in Uzbekistan), the main risks of applying AI to credit scoring are identified: discrimination, algorithmic opacity, violations of consumer rights, and model misalignment. The existing legal safeguards are analyzed, including the prevention of algorithmic discrimination, the requirement that decisions be transparent and explainable, the protection of personal data, and the right to contest automated decisions. Drawing on international experience, the discussion section offers proposals for improving the regulation of AI use in bank lending in Uzbekistan: developing dedicated regulatory acts for high-risk AI systems, introducing mandatory risk assessments and algorithm audits, and strengthening compliance with the principles of fairness and transparency. These measures would reduce the risk of algorithmic errors and abuse in AI systems, increase trust in the financial sector, and help balance the protection of citizens' rights with innovation.
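The mandatory algorithm audit proposed above can be illustrated with a minimal fairness check on a scoring model's decisions. The sketch below is purely illustrative (the data, group labels, and the demographic-parity metric are assumptions, not taken from the article): it measures the gap in approval rates between two protected applicant groups, one common first step in an algorithmic discrimination audit.

```python
# Illustrative sketch of a fairness audit step for a credit-scoring model.
# Data and group labels are hypothetical; the demographic parity gap is
# one of several metrics an auditor might report.

def approval_rate(decisions):
    """Share of applicants approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions, groups, group_a, group_b):
    """Absolute difference in approval rates between two protected groups."""
    a = [d for d, g in zip(decisions, groups) if g == group_a]
    b = [d for d, g in zip(decisions, groups) if g == group_b]
    return abs(approval_rate(a) - approval_rate(b))

# Hypothetical audit sample: model decisions and each applicant's group.
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups    = ["f", "m", "f", "m", "f", "m", "f", "m", "f", "m"]

gap = demographic_parity_gap(decisions, groups, "f", "m")
print(f"Demographic parity gap: {gap:.2f}")  # 0.20 on this sample
```

In a real audit, such a gap would be computed on held-out applicant data and compared against a regulator-defined tolerance; a large gap would trigger the kind of review and remediation obligations the article argues should be codified for high-risk AI systems.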
References
1. Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255-260.
2. Citron, D. K., & Pasquale, F. (2014). The scored society: Due process for automated predictions. Washington Law Review, 89, 1-33.
3. Hanson, M., Cook, S., & Vaidhyanathan, S. (2019). The Apple Card Didn't 'See' Gender—and That's the Problem. Wired.
4. Bode, M., & Helberger, N. (2020). The GDPR and algorithmic decision-making – Safeguarding individual rights but forgetting society. Journal of Consumer Policy, 43, 525-542.
5. Kearns, M., & Roth, A. (2019). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.
6. Kaminski, M. E., & Malgieri, G. (2021). Multi-layered explanations from algorithmic impact assessments in the GDPR. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.
7. Felländer, A., Siri, S., & Teigland, R. (2018). The three phases of regulatory development for AI: A proposed model for balancing innovation and risk. Scandinavian Journal of Risk and Insurance, 34(2), 76-95.
8. Zweigert, K., & Kötz, H. (1998). Introduction to comparative law. Oxford University Press.
9. Article 29 Working Party. (2018). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679.
10. Malgieri, G. (2023). The CJEU's SCHUFA Decision: Automated Credit Scoring Under Art. 22 GDPR. European Data Protection Law Review, 9(3), 386-395.
11. Veale, M., & Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97-112.
12. Dignum, V. (2023). A comprehensive approach to AI risk management. Nature Machine Intelligence, 5, 706-714.
13. Begmatov, A. S. (2020). Legal aspects of personal data protection in the Republic of Uzbekistan. Vestnik TSUL, 4, 56-67. (in Russian)
14. Ministry of Digital Technologies of the Republic of Uzbekistan. Strategy for the Development of Artificial Intelligence. URL: https://gov.uz/ru/digital/pages/about
15. Fuster, A., Goldsmith-Pinkham, P., Ramadorai, T., & Walther, A. (2022). Predictably unequal? The effects of machine learning on credit markets. The Journal of Finance, 77(1), 5-47.
16. Zednik, C. (2021). Solving the black box problem: A normative framework for explainable artificial intelligence. Philosophy & Technology, 34, 265-288.
17. Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for. Duke Law & Technology Review, 16, 18.
18. Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., ... & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82-115.
19. Federal Trade Commission. (2018). Fair Credit Reporting Act provisions and requirements relating to consumer notifications.
20. Ramsay, I. (2016). Consumer law and policy: Text and materials on regulating consumer markets. Bloomsbury Publishing.
21. Hand, D. J., & Henley, W. E. (1997). Statistical classification methods in consumer credit scoring: a review. Journal of the Royal Statistical Society: Series A (Statistics in Society), 160(3), 523-541.
22. Thomas, L. C. (2009). Consumer credit models: Pricing, profit and portfolios. Oxford University Press, 228-236.
23. Waldman, A. E. (2020). Power, process, and automated decision-making. Fordham Law Review, 88(2), 613-648.