1. Information Commissioner's Office (ICO). (2021). Guidance on AI and data protection.
Reference: Part 2, 'Principle 1: Lawfulness, fairness and transparency', page 31. The guidance states, "The transparency principle requires you to be clear, open and honest with people from the start about who you are, and how and why you use their personal data... This can be challenging in the context of AI, as it can be difficult to understand and explain how an AI system works." This difficulty directly contributes to the potential for invisible processing.
2. European Data Protection Board (EDPB). (2018). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (WP251rev.01).
Reference: Section III.B.3, 'Information to be provided to the data subject', page 25. The guidelines emphasize that for automated decision-making, controllers must provide "meaningful information about the logic involved". The inherent complexity of some AI models makes providing this information challenging, potentially leaving the data subject unaware of the processing logic or even its occurrence.
3. Kaminski, M. E. (2019). The Right to Explanation, Explained. Berkeley Technology Law Journal, 34(1), 189-218.
Reference: Page 199. The paper discusses the opacity of machine learning systems, noting that "many modern machine learning models are not readily explainable even to their designers." This inherent opacity is a primary driver of 'invisible processing' from the end-user's perspective, as it becomes difficult to communicate transparently what the system is doing. (Available via university repositories and legal scholarship databases.)