1. GitHub Docs, "Responsible AI for GitHub Copilot": This document outlines the safety mechanisms for Copilot. In the section "How does GitHub address risks with GitHub Copilot?", it states: "We have implemented filters, for example, to block offensive words... We are also working on a toxicity filter to detect and alert us to toxic language...".
2. Microsoft, "Responsible AI Transparency for GitHub Copilot": This resource details the system's capabilities and limitations. Under the section "Mitigating Harmful Content," it specifies the types of content filtered: "We have developed a robust set of filters that are constantly evolving to block harmful suggestions, including... hate speech, discrimination, or violence; sexually explicit or violent material...". This directly supports that options A and B are the types of content flagged.