1. Confluent Documentation, Apache Kafka > Operations > Running Kafka in Production: In the "File Descriptors and mmap" section, it states, "A Kafka broker uses a large number of open file handles... If you have a large number of partitions in a broker, the file descriptor limit will be hit. We recommend at least 100,000 allowed open file descriptors for the Kafka process." This directly ties the number of partitions on a broker to the file descriptor limit the operating system must allow; a minimal check against this recommendation is sketched after this list.
2. Confluent Documentation, Apache Kafka > Design Details > Persistence: This section explains the on-disk format: "The data log for a partition is split into a set of segment files... The broker keeps an index that maps from an offset to a position in the segment file." This architectural detail clarifies why each partition requires multiple files on disk and therefore consumes multiple file handles; a back-of-envelope file count is sketched after this list.
3. Kreps, J., Narkhede, N., & Rao, J. (2011). Kafka: A Distributed Messaging System for Log Processing. Proceedings of the NetDB, 1-7. Although not a Confluent document, this foundational paper on Kafka's design describes the log-structured storage model in which each partition is a "log" implemented as a set of files on disk, reinforcing the link between partitions and file system resources (Section 2.2, "Persistence and Caching").
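
Putting sources 1 and 2 together: because every partition is stored as a set of segment files, the open-file count grows roughly with partitions × segments × files-per-segment. The sketch below is only a back-of-envelope illustration; the constants (three files per segment, a flat allowance for sockets) and the function name `estimate_open_files` are assumptions made for the example, not figures from the cited documentation.

```python
# Back-of-envelope estimate of how many file handles a broker might hold open.
# The constants are illustrative assumptions, not values from the cited docs.
FILES_PER_SEGMENT = 3        # assumption: one log file plus index files per segment
SOCKET_HEADROOM = 5_000      # assumption: allowance for client/replica connections


def estimate_open_files(partitions_per_broker: int,
                        avg_segments_per_partition: int) -> int:
    """Rough count of open file handles: segment files plus a socket allowance."""
    segment_files = (partitions_per_broker
                     * avg_segments_per_partition
                     * FILES_PER_SEGMENT)
    return segment_files + SOCKET_HEADROOM


if __name__ == "__main__":
    # 4,000 partitions averaging 10 segments each -> 125,000 handles,
    # already above the 100,000 descriptors recommended in source 1.
    print(estimate_open_files(4_000, 10))
```

Under these assumptions, a broker with a few thousand partitions easily exceeds the default per-process limit of 1,024 found on many Linux distributions, which is the point the documentation is making.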
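
As a quick operational check against the recommendation quoted in source 1, the following sketch reads the current process's file-descriptor limits via Python's standard `resource` module (Unix-only) and compares the soft limit with the suggested 100,000. It is a minimal illustration, not part of Kafka or the Confluent tooling.

```python
# Compare the process's RLIMIT_NOFILE soft limit against the recommendation
# quoted in source 1. Unix-only: the `resource` module is unavailable on Windows.
import resource

RECOMMENDED_MIN = 100_000  # from the Confluent recommendation quoted above

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

if soft != resource.RLIM_INFINITY and soft < RECOMMENDED_MIN:
    print(f"Soft limit is below {RECOMMENDED_MIN}; raise it "
          "(e.g. ulimit -n, or LimitNOFILE= in a systemd unit) "
          "before running a broker with many partitions.")
```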