1. Microsoft Responsible AI Standard, v2. (Official Vendor Documentation). Microsoft defines "Fairness" as a key principle, stating, "AI systems should treat all people fairly." The standard emphasizes the need to "understand and control for different types of bias, such as allocation and quality-of-service harms, that can affect system performance and, consequently, people's lives." This directly supports mitigating bias in data and models.
Source: Microsoft, "The Microsoft Responsible AI Standard, v2," General Tenets, Principle 2: Fairness, page 10.
2. GitHub, "Building with responsible AI." (Official Vendor Documentation). GitHub, as a Microsoft company, adheres to these principles. Their approach involves "measuring for potential harms like bias" and implementing "mitigations to make our models safer." This points to the proactive process of addressing bias during development.
Source: GitHub Blog, "How GitHub is building for responsible AI," Published November 8, 2023.
3. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" (Peer-Reviewed Academic Publication). This paper extensively discusses how large language models, the technology behind Copilot, learn and perpetuate biases from their vast training data. It argues that the source of unfairness is the biased data the models are trained on.
Source: FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610–623. DOI: https://doi.org/10.1145/3442188.3445922