Artificial Intelligence Ethics and Fairness: A study to address bias and fairness issues in AI systems, and the ethical implications of AI applications
DOI: https://doi.org/10.31305/rrijm2023.v03.n02.004
Keywords: AI Ethics, Fairness, Bias, Ethical Implications, AI Applications, Ethical Decision-making
Abstract
Artificial intelligence (AI) has made incredible strides in several fields, revolutionising business and everyday life. Alongside this rapid development, concerns about the fairness of AI systems and the moral ramifications of their use have grown in prominence. This article explores the crucial issues of AI fairness and ethics, concentrating on ways to detect and reduce bias in AI systems while also discussing larger ethical implications. The paper highlights the many forms and sources of bias that can develop in AI models and emphasises the possible repercussions of biased decision-making. Several methods and strategies for dealing with bias in AI systems are surveyed, including data preprocessing, algorithmic changes, and transparency measures. In an effort to balance fairness and efficacy, the trade-offs between fairness goals and overall model performance are examined. The article also emphasises how crucial it is for AI systems to be transparent and understandable in order to foster accountability. Regulatory issues and ethical decision-making frameworks are also investigated with a view to establishing ethical AI development and deployment practices. Through in-depth analysis and case studies, this study emphasises the need for ongoing research and development of ethical AI systems to guarantee a just and equitable future for AI applications.
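As a concrete illustration of two ideas the abstract mentions, the sketch below (plain Python, hypothetical data; not code from the article) computes a demographic parity difference for binary predictions across two groups, and the instance weights of the reweighing preprocessing scheme in the spirit of Kamiran & Calders (2012), under which group membership and label become statistically independent in the weighted data.

```python
from collections import Counter

def demographic_parity_difference(y_pred, groups):
    """Absolute gap in positive-prediction rates.

    Assumes exactly two distinct values in `groups` and
    binary (0/1) predictions in `y_pred`.
    """
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

def reweighing_weights(y, groups):
    """Reweighing in the style of Kamiran & Calders (2012):

    w(g, y) = P(g) * P(y) / P(g, y)

    Instances of an (group, label) cell that is under-represented
    relative to independence get weights above 1, over-represented
    cells get weights below 1.
    """
    n = len(y)
    count_g = Counter(groups)           # marginal counts per group
    count_y = Counter(y)                # marginal counts per label
    count_gy = Counter(zip(groups, y))  # joint counts per cell
    return [
        (count_g[g] / n) * (count_y[label] / n) / (count_gy[(g, label)] / n)
        for g, label in zip(groups, y)
    ]
```

For example, with predictions `[1, 0, 1, 1]` and groups `["a", "a", "b", "b"]`, the positive rates are 0.5 and 1.0, so the demographic parity difference is 0.5; training on reweighed instances is one of the data-side mitigation strategies the article surveys, and it leaves the model and loss function untouched, unlike in-processing fairness constraints.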
References
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Baezconde-Garbanati, L., Grana, R. A., & Cruz, T. B. (2020). Artificial Intelligence Ethics in Radiology: An Opportunity for Inclusive Imaging. Journal of the American College of Radiology, 17(7), 873–877. https://doi.org/10.1016/j.jacr.2020.02.015
Barocas, S., & Selbst, A. D. (2016). Big Data's Disparate Impact. California Law Review, 104(3), 671-732. https://doi.org/10.15779/Z38BG31
Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., ... & Zhang, Y. (2018). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943. https://arxiv.org/abs/1810.01943
Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS) (pp. 4349-4357). https://proceedings.neurips.cc/paper/2016/hash/086b536f2d26726c759c7a7563db32c9-Abstract.html
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, PMLR 81, 77-91. http://proceedings.mlr.press/v81/buolamwini18a.html
Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora necessarily contain human biases. Science, 356(6334), 183-186. https://doi.org/10.1126/science.aal4230
Cheney-Lippold, J. (2018). We Are Data: Algorithms and the Making of Our Digital Selves. NYU Press.
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153-163. https://doi.org/10.1089/big.2016.0047
Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797-806). https://doi.org/10.1145/3097983.3098095
Crawford, K., Dobbe, R., Dryer, T., Fried, G., Green, B., Kaziunas, E., ... & West, S. M. (2019). The AI Now report 2019. AI Now Institute, New York, NY. https://ainowinstitute.org/AI_Now_2019_Report.pdf
Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings: A tale of opacity, choice, and discrimination. Proceedings on Privacy Enhancing Technologies, 2015(1), 92-112. https://doi.org/10.1515/popets-2015-0006
Diakopoulos, N. (2016). Accountability in Algorithmic Decision Making. Digital Journalism, 4(6), 703-718. https://doi.org/10.1080/21670811.2016.1178596
Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608. https://arxiv.org/abs/1702.08608
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference (pp. 214-226). https://doi.org/10.1145/2090236.2090255
European Commission. (2019). Ethics Guidelines for Trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.6ce0c5cd
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., ... & Tamburrini, G. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689-707. https://doi.org/10.1007/s11023-018-9482-5
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2020). Datasheets for Datasets. arXiv preprint arXiv:1803.09010. https://arxiv.org/abs/1803.09010
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Diverse and Inclusive Voices in AI Ethics. AI Magazine, 39(4), 61-65. https://doi.org/10.1609/aimag.v39i4.6950
Google. (2018). AI at Google: Our principles. https://www.blog.google/technology/ai/ai-principles/
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 1-42. https://doi.org/10.1145/3236009
Hagendorff, T. (2020). The Ethics of AI Ethics — An Evaluation of Guidelines. Minds and Machines, 30(1), 99-120. https://doi.org/10.1007/s11023-019-09517-8
Hajian, S., Bonchi, F., & Castillo, C. (2016). Algorithmic Bias: From Discrimination Discovery to Fairness-Aware Data Mining. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2125-2126. https://doi.org/10.1145/2939672.2945388
Hardt, M., Price, E., & Srebro, N. (2016). Equality of opportunity in supervised learning. In Advances in neural information processing systems (pp. 3315-3323). https://proceedings.neurips.cc/paper/2016/hash/0b5fe4c13aa79a57ef971b7a007bdbc9-Abstract.html
Heidari, H., Feller, A., & Hajian, S. (2019). A moral framework for understanding of fairness in machine learning. In Proceedings of the 2nd Conference on Fairness, Accountability, and Transparency (pp. 409-418). https://doi.org/10.1145/3287560.3287596
Hutson, M. (2020). Why is AI Explainability Important? Healthcare Analytics News. https://www.hcanews.com/news/why-is-ai-explainability-important
IEEE. (2016). Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. https://ethicsinaction.ieee.org/
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
Jobin, A., Ienca, M., & Vayena, E. (2020). Artificial Intelligence: The Global Landscape of Ethics Guidelines. In The Oxford Handbook of Ethics of AI (pp. 59-80). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780190067397.013.4
Kamiran, F., & Calders, T. (2012). Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33(1), 1-33. https://doi.org/10.1007/s10115-011-0463-8
Kusner, M. J., Loftus, J. R., Russell, C., & Silva, R. (2017). Counterfactual fairness. In Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 4066-4076). https://proceedings.neurips.cc/paper/2017/hash/16e82bb7633e9f8914f7d40a82d26145-Abstract.html
Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2018). How We Analyzed the COMPAS Recidivism Algorithm. ProPublica. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
Lipton, Z. C. (2016). The Mythos of Model Interpretability. arXiv preprint arXiv:1606.03490. https://arxiv.org/abs/1606.03490
Liu, B., Jiang, X., Liao, L., Zhao, T., Dey, A. K., Le, T. T., & Chen, C. (2020). Towards better understanding of gradient-based attribution methods for Deep Neural Networks. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-13). https://doi.org/10.1145/3313831.3376188
Liu, C., Gao, J., Chen, P., & Han, Y. (2019). Bias also matters: Bias correction for deep neural network based age estimation with anchor age groups. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 108-114). https://doi.org/10.1145/3287560.3287592
Lundberg, S. M., & Lee, S. I. (2017). A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems (pp. 4765-4774). https://proceedings.neurips.cc/paper/2017/hash/8a20a8621978632d76c43dfd28b67767-Abstract.html
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A Survey on Bias and Fairness in Machine Learning. arXiv preprint arXiv:1908.09635. https://arxiv.org/abs/1908.09635
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
Pleiss, G., Raghavan, M., Wu, F., Kleinberg, J., & Weinberger, K. Q. (2017). On Fairness and Calibration. In Proceedings of the 34th International Conference on Machine Learning (Vol. 70, pp. 1337-1345). https://doi.org/10.5555/3305890.3306030
Raghavan, M., Kleinberg, J., Pardos, Z. A., & Wallach, H. (2018). Mitigating bias in algorithmic hiring: Evaluating claims and practices. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 113-119). https://doi.org/10.1145/3278721.3278779
Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why Should I Trust You?" Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). https://doi.org/10.1145/2939672.2939778
Romeo Casabona, C. M. (2018). Human Rights and Ethical Considerations in the Application of Artificial Intelligence to Health Care. In Artificial Intelligence in Health Care (pp. 73-84). Springer, Cham. https://doi.org/10.1007/978-3-319-94857-8_7
Rudin, C., Wang, C., Coker, B., & Doshi-Velez, F. (2019). The age of secrecy and unfairness in recidivism prediction. Harvard Data Science Review, 1(2). https://doi.org/10.1162/99608f92.4a4cf2b8
Ruggieri, S. (2014). Discrimination-aware data mining. ACM SIGKDD Explorations Newsletter, 15(1), 1-10. https://doi.org/10.1145/2722877.2722883
Sweeney, L. (2013). Discrimination in online ad delivery. In Privacy, Security, Risk and Trust (PASSAT), 2013 International Conference on and 2013 International Conference on Social Computing (SocialCom) (pp. 248-255). IEEE. https://doi.org/10.1109/PASSAT/SocialCom.2013.113
Zafar, M. B., Valera, I., Gomez Rodriguez, M., & Gummadi, K. P. (2017). Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web (pp. 1171-1180). https://doi.org/10.1145/3038912.3052660
Zemel, R., Wu, Y., Swersky, K., Pitassi, T., & Dwork, C. (2013). Learning Fair Representations. In Proceedings of the 30th International Conference on Machine Learning (Vol. 28, pp. 325-333). https://doi.org/10.5555/3042817.3043053