Machine learning approaches are being explored for video compression. Conservative approaches replace individual stages of the conventional MPEG coding pipeline with learned modules, while more disruptive end-to-end approaches aim to replace the entire MPEG chain with a single trained model. Optical flow networks exploit temporal redundancy by estimating motion between consecutive frames. Fully neural video compression models go further, jointly learning motion estimation, motion compression, and residual compression in an end-to-end optimized framework. However, the rate-distortion gains must be weighed against the increased computational complexity, and neural network approaches are not yet mature enough for inclusion in video compression standards.
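The motion-plus-residual structure shared by MPEG-style codecs and end-to-end learned codecs can be illustrated with a toy sketch. The example below is a minimal, hypothetical NumPy implementation (the function names are illustrative, not from any codec or paper): exhaustive block matching stands in for a learned optical-flow network, the motion-compensated prediction is subtracted from the current frame, and only the quantized residual would need to be transmitted alongside the motion vectors.

```python
import numpy as np

def block_match(prev, cur, bs=8, search=4):
    """Per-block motion vectors by exhaustive search (a toy stand-in
    for a learned optical-flow / motion-estimation network)."""
    h, w = cur.shape
    mvs = np.zeros((h // bs, w // bs, 2), dtype=int)
    for by in range(h // bs):
        for bx in range(w // bs):
            y, x = by * bs, bx * bs
            block = cur[y:y + bs, x:x + bs]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    py, px = y + dy, x + dx
                    if py < 0 or px < 0 or py + bs > h or px + bs > w:
                        continue  # candidate block leaves the frame
                    sad = np.abs(block - prev[py:py + bs, px:px + bs]).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs

def compensate(prev, mvs, bs=8):
    """Motion-compensated prediction of the current frame."""
    pred = np.zeros_like(prev)
    for by in range(mvs.shape[0]):
        for bx in range(mvs.shape[1]):
            y, x = by * bs, bx * bs
            dy, dx = mvs[by, bx]
            pred[y:y + bs, x:x + bs] = prev[y + dy:y + dy + bs,
                                            x + dx:x + dx + bs]
    return pred

def encode_inter_frame(prev, cur, q=8, bs=8):
    """One inter-frame coding step: motion vectors plus a lossily
    quantized residual; returns the decoder-side reconstruction."""
    mvs = block_match(prev, cur, bs)
    pred = compensate(prev, mvs, bs)
    residual_q = np.round((cur - pred) / q).astype(int)  # lossy step
    recon = pred + residual_q * q
    return mvs, residual_q, recon

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32)).astype(float)
cur = np.roll(prev, (2, -1), axis=(0, 1))  # simulated global motion
mvs, residual_q, recon = encode_inter_frame(prev, cur)
```

In a fully neural codec, each of these hand-designed stages (motion search, warping, residual quantization) is replaced by a trainable network, and all stages are optimized jointly against a single rate-distortion loss. Note that even where the motion estimate is imperfect, the quantized residual bounds the reconstruction error by half the quantizer step.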