Flatiron students Steven Zhou and Heidi Hansen explain how Core Image works on iOS and OS X, helping developers process images efficiently without dealing with low-level interactions with the GPU or CPU.

2. What is a Digital Image?
• It’s a collection of pixels, and each pixel represents a specific
color.
• Combining hundreds of thousands of pixels together forms a
digital image.
• It’s a 2-dimensional array of pixels: X and Y, or rows and
columns.
• “Digital image” usually refers to a raster (bitmap) image.
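The "2-dimensional array of pixels" idea can be sketched in plain Swift; the values below are purely illustrative and not part of Core Image:

```swift
// A tiny 3×3 grayscale "image" as a 2-D array of pixel values
// (0 = black, 255 = white). Real bitmaps also carry color channels.
let image: [[UInt8]] = [
    [  0, 128, 255],
    [128, 255, 128],
    [255, 128,   0],
]
// Indexing by row and column addresses a single pixel.
let pixel = image[1][2]   // row 1, column 2
```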
3. What is Core Image?
• It’s an image-processing framework on both iOS and OS X. It was first
available only on OS X and was introduced to iOS in iOS 5.
• It’s designed to do “real-time” image processing at the pixel level for both
still images and video.
• It operates on image data types from multiple sources, e.g. Core
Graphics, Core Video, and Image I/O.
• It allows developers to process images easily and efficiently without
dealing with low-level interactions with the GPU or CPU.
• It provides more than 90 built-in image-processing filters on iOS, feature
detection capability, and support for automatic image enhancement.
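As a quick sketch, the built-in filters can be listed at runtime by querying the built-in category:

```swift
import CoreImage

// Ask Core Image for the names of all built-in filters available
// on the current system (the exact count varies by OS version).
let names = CIFilter.filterNames(inCategory: kCICategoryBuiltIn)
print("Built-in filters available: \(names.count)")
```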
4. Key Core Image Classes
• CIImage:
1. An object that represents an image.
2. You can create a CIImage object from different inputs.
• CIFilter:
1. An object that represents an effect.
2. Has at least one input parameter.
3. Produces a CIImage object.
• CIContext:
1. An object that draws the result produced by a CIFilter,
using either the CPU or GPU rendering path.
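The three classes above combine into one pipeline: a CIImage feeds a CIFilter, and a CIContext renders the result. A minimal sketch (the file path is hypothetical; CISepiaTone is one of the built-in filters):

```swift
import CoreImage

// Hypothetical input file for illustration.
let url = URL(fileURLWithPath: "/path/to/photo.jpg")

if let input = CIImage(contentsOf: url),                 // CIImage: the image
   let filter = CIFilter(name: "CISepiaTone") {          // CIFilter: the effect
    filter.setValue(input, forKey: kCIInputImageKey)     // required input parameter
    filter.setValue(0.8, forKey: kCIInputIntensityKey)   // filter-specific parameter
    if let output = filter.outputImage {                 // CIFilter produces a CIImage
        let context = CIContext()                        // CIContext: picks a GPU or CPU path
        let rendered = context.createCGImage(output, from: output.extent)
        // `rendered` is a CGImage ready for display or saving.
    }
}
```

Filters are lazy: no pixel processing happens until the CIContext actually renders the output image.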