1. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A Survey on Bias and Fairness in Machine Learning. ACM Computing Surveys, 54(6), 1–35. In Section 2.2, "Human Bias," the authors state, "Bias can arise from a human’s cognitive, personal, and institutional biases... This can influence the data generation process, the algorithm design, and the evaluation process." This identifies the human element as a core source of bias that permeates the entire system. (DOI: https://doi.org/10.1145/3457607)
2. Suresh, H., & Guttag, J. V. (2021). A Framework for Understanding Sources of Harm throughout the Machine Learning Life Cycle. In Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO '21). This framework details how human choices and biases introduce harm at every stage, including data collection (leading to "tainted examples") and model development. It positions human-driven processes as the origin of these issues. (DOI: https://doi.org/10.1145/3465416.3483305)
3. Friedman, B., & Nissenbaum, H. (1996). Bias in Computer Systems. ACM Transactions on Information Systems, 14(3), 330–347. This foundational paper identifies three categories of bias, including "preexisting bias," which is rooted in social institutions, practices, and attitudes. Human developers, as members of society, carry these biases and can embed them into the systems they build. (DOI: https://doi.org/10.1145/230538.230561)