In this paper, we introduce the notion of positive-sum fairness and argue that larger disparities between subgroups are not necessarily harmful, as long as they do not come at the expense of the performance of any specific subgroup. Overall performance, group fairness, and positive-sum fairness were analyzed for four models, each of which made use of sensitive features in a different way.
Our study highlights the need for a nuanced understanding of fairness metrics and their implications in real-world applications. Proper integration of medical knowledge is critical both when using sensitive attributes and when assessing fairness accurately, especially in cases where models show significant variation in performance across subgroups.
Where traditional approaches often aim to achieve equality, positive-sum fairness pushes each group toward the highest level of performance it can attain. This can lead to better overall outcomes, because it encourages meeting the specific needs and challenges of each group without diminishing the quality of care provided to others. However, because it is defined as an optimization problem, it may also have unintended side effects: it may inadvertently favor larger or better-represented groups, by focusing effort on the groups with the greatest impact on overall performance rather than on the groups with the greatest needs. It is therefore worth noting that meeting the positive-sum fairness criterion alone does not guarantee that a model is fair from an equality perspective, and using this notion in conjunction with other metrics gives a more comprehensive picture of model fairness.
Since positive-sum fairness is a relative measure, it requires a baseline. Future work in this area includes developing a more robust baseline, or adapting the approach to remove the need for one altogether. It would also be useful to compare models on out-of-distribution test data, and to include other sensitive attributes such as gender and age, while accounting for confounding factors.
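The criterion described above admits a simple operational reading: relative to a baseline model, larger gaps between subgroups are tolerated as long as no subgroup's performance drops below its baseline level. The following is a minimal sketch of that check, assuming per-group performance scores are already computed; the function name and the example numbers are illustrative, not the paper's implementation.

```python
def satisfies_positive_sum_fairness(baseline_scores, model_scores):
    """Check the positive-sum fairness criterion sketched above.

    Both arguments map subgroup name -> performance score (higher is
    better). Larger gaps between groups are tolerated, provided no
    subgroup performs worse than it did under the baseline model.
    """
    assert baseline_scores.keys() == model_scores.keys()
    return all(model_scores[g] >= baseline_scores[g] for g in baseline_scores)


# Illustrative scores: the gap between groups widens (0.04 -> 0.11),
# but both groups improve on the baseline, so the criterion holds.
baseline = {"group_a": 0.82, "group_b": 0.78}
model = {"group_a": 0.90, "group_b": 0.79}
print(satisfies_positive_sum_fairness(baseline, model))  # True
```

Note that the same check would fail for a model that raises overall performance while lowering any single subgroup's score, which is exactly the situation the criterion is designed to rule out.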
Disclosure of interests. The authors declare that there are no conflicts of interest with respect to the publication of this paper.