While the last few years have seen great advances in computing paradigms for big data stores, one critical bottleneck remains in this architecture: the ingestion process. A poorly designed ingestion process yields endless headaches rather than immediate insight into the data. A well-designed ingestion infrastructure, on the other hand, gives you real-time visibility into how your systems are functioning at any given time, which can significantly increase the effectiveness of the ad campaigns, fraud-detection systems, preventive-maintenance systems, and other critical applications underpinning your business.
In this session we will explore various modes of ingestion, including pipelining, publish-subscribe, and micro-batching, and identify the use cases where each applies. We will present these in the context of open source frameworks, such as Apache Flume and Apache Kafka, that can be used to build such solutions. We will also show when and how to combine multiple modes and frameworks into hybrid solutions that address non-trivial ingestion requirements with little or no extra overhead. Along the way we will drill down into configuration and sizing details for these frameworks to ensure optimal operation and utilization in long-running deployments.
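To make one of these modes concrete, the sketch below illustrates the core idea behind micro-batching: buffer incoming records and flush them downstream when either a record-count or an age threshold is reached. This is an illustrative toy, not code from Flume or Kafka; the class name `MicroBatcher` and its parameters are invented for this example.

```python
from time import monotonic

class MicroBatcher:
    """Buffers records and flushes when a size or age threshold is hit.

    Illustrative sketch only; real frameworks add concurrency,
    retries, and delivery guarantees on top of this basic loop.
    """

    def __init__(self, sink, max_records=100, max_age_seconds=1.0):
        self.sink = sink                  # callable that receives a list of records
        self.max_records = max_records    # flush when this many records are buffered
        self.max_age = max_age_seconds    # flush when the oldest record is this old
        self.buffer = []
        self.opened_at = None             # timestamp of the first record in the batch

    def add(self, record):
        if not self.buffer:
            self.opened_at = monotonic()
        self.buffer.append(record)
        if (len(self.buffer) >= self.max_records
                or monotonic() - self.opened_at >= self.max_age):
            self.flush()

    def flush(self):
        if self.buffer:
            self.sink(self.buffer)
            self.buffer = []

# Collect flushed batches in a list to observe the batching behavior.
batches = []
batcher = MicroBatcher(batches.append, max_records=3)
for event in range(7):
    batcher.add(event)
batcher.flush()  # drain the partial final batch
# batches is now [[0, 1, 2], [3, 4, 5], [6]]
```

The trade-off this exposes is the one the session covers: a larger `max_records` improves throughput, while a smaller `max_age_seconds` bounds end-to-end latency.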