https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Datapipeline "The data pipeline segments in depth.
INPUT - In the input segment, Splunk software consumes data. It acquires the raw data stream from its source, breaks it into 64K blocks, and annotates each block with some metadata keys. The keys can also include values that are used internally, such as the character encoding of the data stream, and values that control later processing of the data, such as the index into which the events should be stored.
PARSING - Annotating individual events with metadata copied from the source-wide keys. Transforming event data and metadata according to regex transform rules."