This document discusses optimizing data ingestion into Hadoop for high-volume event streaming. It explains that Hadoop was not designed for small, high-volume event records and outlines several techniques to improve ingest performance: buffering events to reduce mapper wakeups, compressing batched events into larger blocks for network transfer, and analyzing event data during ingestion to store metadata that speeds up later access and analysis. The key is developing "mechanical sympathy": balancing resource usage so that the available hardware is fully utilized and bottlenecks are removed.
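The buffering and compression ideas above can be sketched as a small accumulator that batches many tiny events and flushes them as one compressed block. This is a hypothetical illustration, not any real Hadoop client API; the `EventBuffer` class, the `block_size` threshold, and the choice of gzip and newline-delimited JSON are all assumptions made for the example.

```python
import gzip
import json

class EventBuffer:
    """Accumulate small events and flush them as one compressed block.

    Hypothetical sketch: batching amortizes per-record overhead, and
    compressing the batch reduces bytes sent over the network, as the
    summary describes. Real ingestion pipelines would hand the block
    to an HDFS writer instead of returning it.
    """

    def __init__(self, block_size=64 * 1024):
        self.block_size = block_size  # flush threshold in buffered bytes
        self._lines = []              # pending serialized events
        self._size = 0                # bytes currently buffered

    def add(self, event):
        """Buffer one event; return a compressed block when the
        threshold is reached, else None."""
        line = json.dumps(event).encode("utf-8") + b"\n"
        self._lines.append(line)
        self._size += len(line)
        if self._size >= self.block_size:
            return self.flush()
        return None

    def flush(self):
        """Compress and return everything buffered so far (or None
        if the buffer is empty), then reset the buffer."""
        if not self._lines:
            return None
        block = gzip.compress(b"".join(self._lines))
        self._lines, self._size = [], 0
        return block
```

A caller would feed events into `add()` and ship each returned block, calling `flush()` once at shutdown to drain the remainder; the threshold trades latency (larger blocks wait longer) against per-write overhead.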