Image processing is the technique of using a computer to analyze and manipulate images to obtain a desired result. It is a very broad field, covering image enhancement, image restoration, image reconstruction, image analysis, pattern recognition, computer vision, and many other application areas. Many of these application technologies are closely related in nature, but different application fields often have different concerns.
Advantages of FPGA for image processing
One of the most important advantages of using an FPGA for image processing is that an FPGA can perform real-time pipelined operations, achieving the highest possible real-time performance. In application fields with very demanding real-time requirements, therefore, only an FPGA can do the image processing. For example, in some sorting equipment the image processing is almost always done in an FPGA, because the delay between the camera seeing the material and the machine issuing the execution instruction is only a few milliseconds. This requires the image processing to be fast and its latency to be fixed, and only the real-time pipelined operation of an FPGA can meet this requirement.
To understand this advantage, you must understand the difference between the real-time pipelined operation an FPGA performs and the way a DSP, GPU, or CPU processes images:
DSPs, GPUs, and CPUs process images essentially frame by frame. The image data collected from the camera is first stored in memory, and then the processor reads the image data from memory for processing. If the capture frame rate is 30 frames per second and the DSP or GPU can finish processing one frame within 1/30 of a second, that is generally regarded as real-time processing.
An FPGA, by contrast, performs real-time pipelined operations on the image line by line. The FPGA can be connected directly to the image sensor chip to receive the image data stream; if the stream is in RAW (Bayer) format, RGB data can be recovered by interpolation. The key to the FPGA's real-time pipelined processing is that it can cache a few lines of image data in its internal Block RAM. Block RAM is somewhat like the cache in a CPU, except that a CPU cache is not fully under your control, while Block RAM is completely controllable and can be used to implement all kinds of flexible calculations. By caching just a few lines of image data, the FPGA processes the image in real time: the data is processed as it flows through, and it never needs to be written to a DDR buffer and read back for processing.
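The line-buffer idea can be illustrated with a minimal software sketch (the function name and the 3x3 mean filter are illustrative assumptions, not a hardware design): only the two previous lines are cached, and each output line is emitted as soon as its window of input data has flowed in.

```python
from collections import deque

def stream_3x3_mean(rows, width):
    # Models the FPGA line-buffer idea in software: only the two
    # previous image lines are cached (as in internal Block RAM),
    # and each output line is produced as soon as its 3x3 window
    # of input data has arrived -- no full-frame buffer is needed.
    line_buf = deque(maxlen=2)          # two "Block RAM" line buffers
    out = []
    for row in rows:                    # one input line arrives at a time
        if len(line_buf) == 2:
            r0, r1 = line_buf
            out.append([
                (r0[x-1] + r0[x] + r0[x+1] +
                 r1[x-1] + r1[x] + r1[x+1] +
                 row[x-1] + row[x] + row[x+1]) // 9
                for x in range(1, width - 1)
            ])
        line_buf.append(row)            # the oldest cached line is evicted
    return out
```

Note that the output starts appearing while the frame is still streaming in; this is what gives the pipeline its fixed, few-line latency.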
The road to FPGA image processing starts here
When starting FPGA development for image processing, the first thing to consider is the capability of the FPGA board, because image processing consumes a great deal of resources. Many FPGA development boards for image processing can be found online, and some have abundant resources that are sufficient for initial experiments.
In image processing, FPGAs are mainly used in the preprocessing stage.
What is image preprocessing? Examples are image distortion correction, filtering, edge detection, color detection, and threshold processing. These preprocessing steps share common characteristics: the algorithms are relatively simple and the operations are highly repetitive. But apart from preprocessing, is there nothing else an FPGA can do?
Image processing can be pictured as a three-level pyramid with bottom, middle, and top levels, which target the pixel level, the feature level, and the target level respectively. A mature image processing application completes all three levels.
At the pixel level, we apply transformations to the image to enhance its useful information while filtering out irrelevant information such as noise. The preprocessed image is then segmented, completing the transition from the pixel level to the feature level. Segmentation can be understood as detecting regions in the image that share common properties. According to one or more classification rules, these regions are assigned to preset feature types, forming the data sets used for later recognition. At this point the data is no longer just an image; it carries rich feature information, such as the location of an object. At the top of the pyramid, the acquired feature information can, if necessary, be used as a training set to build a model, and the model can then perform recognition and describe the images captured in real time.
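The pixel-to-feature transition can be sketched in a few lines (the function name, the threshold rule, and the bounding-box feature are illustrative assumptions): binarize the image, then reduce the foreground region to a location feature.

```python
def segment_and_locate(img, thresh):
    # Pixel level -> feature level: binarize the image with a
    # threshold, then reduce the foreground region to a single
    # feature -- its bounding box, i.e. the location of the object.
    coords = [(x, y)
              for y, row in enumerate(img)
              for x, v in enumerate(row)
              if v > thresh]
    if not coords:
        return None                     # no object detected
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs), max(ys))
```

The returned tuple is no longer an image at all; it is exactly the kind of "rich feature information" that the upper level of the pyramid consumes.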
Points for Attention in the Design of an Image Processing System
1. Separate algorithm development from FPGA implementation. The algorithm can be tested and debugged on a large number of image samples in a software image processing environment, and only then mapped to hardware, which greatly shortens the hardware debugging cycle.
2. Algorithm accuracy. Most image processing algorithms require floating-point calculations, but floating point is very uneconomical in an FPGA, so the algorithm must be converted to fixed point. This conversion introduces a precision-loss problem that has to be evaluated.
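A minimal sketch of the float-to-fixed conversion and the precision loss it causes (the helper names and the Q.8 format choice are assumptions for illustration):

```python
def to_fixed(x, frac_bits):
    # Quantize a float to a signed fixed-point integer with
    # `frac_bits` fractional bits (frac_bits=8 is Q.8 format),
    # as done when porting an algorithm to FPGA arithmetic.
    return round(x * (1 << frac_bits))

def from_fixed(q, frac_bits):
    # Convert the fixed-point integer back to a float.
    return q / (1 << frac_bits)

# A filter coefficient of 1/3 cannot be represented exactly in Q.8:
q = to_fixed(1/3, 8)                    # 85
error = abs(from_fixed(q, 8) - 1/3)     # about 0.0013 of precision lost
```

Whether an error of this size matters depends on the algorithm, which is why the accuracy evaluation belongs in the software stage, before mapping to hardware.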
3. Reasonable partitioning of the architecture. This refers to dividing the work among DSP, CPU, and FPGA. The general rule: computation-heavy, regular operations such as the Sobel operator and mean filtering are performed by the FPGA, while irregular algorithms with dynamic, variable-length loops are performed by the DSP or CPU.
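The Sobel operator mentioned above is a good example of the kind of regular computation that suits an FPGA; a sketch of it on a single window (using the common |Gx| + |Gy| magnitude approximation):

```python
def sobel_3x3(w):
    # Horizontal and vertical Sobel kernels applied to one 3x3
    # window w[y][x]; returns the common |Gx| + |Gy| gradient
    # magnitude approximation. Every output needs the same fixed
    # set of multiply-accumulate steps, which is why this maps so
    # well onto FPGA logic.
    gx = (w[0][2] + 2*w[1][2] + w[2][2]) - (w[0][0] + 2*w[1][0] + w[2][0])
    gy = (w[2][0] + 2*w[2][1] + w[2][2]) - (w[0][0] + 2*w[0][1] + w[0][2])
    return abs(gx) + abs(gy)
```

There is no data-dependent branching or variable-length looping here, which is precisely what distinguishes it from the algorithms better left to a DSP or CPU.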
Basic methods of FPGA design for image processing
1. Array structure combined with pipelined processing. For example, an RGB image contains three channels of data, which require three parallel processing channels; each channel then goes through its own serial pipeline.
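A software analogue of this structure (the stage function is a hypothetical brightness-doubling stage, purely for illustration): each pixel splits into three channels, and the same pipeline stage runs on every channel independently.

```python
def stage(v):
    # One serial pipeline stage per channel (a hypothetical
    # brightness-doubling stage with saturation, for illustration).
    return min(255, 2 * v)

def process_rgb(pixels):
    # Each RGB pixel is split into its three channels, and the same
    # stage runs on every channel independently -- the software
    # analogue of three parallel hardware channels, each with its
    # own serial pipeline.
    return [(stage(r), stage(g), stage(b)) for r, g, b in pixels]
```

In hardware the three channels are physically separate logic running at the same time, so adding channels costs area rather than time.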
2. Cache design
Frame buffers, line buffers, and column alignment.
3. Resources
Resource consumption scales multiplicatively with the image resolution and the size of the processing window.
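A back-of-the-envelope estimate makes the multiplicative scaling concrete (the function name is an assumption; the formula simply counts the cached lines a sliding window needs):

```python
def line_buffer_bits(width, bit_depth, window):
    # Block-RAM bits needed for the line buffers of a window x window
    # processing kernel: (window - 1) full image lines must be cached,
    # so the cost is the product of line width, pixel bit depth, and
    # window height -- it scales multiplicatively.
    return width * bit_depth * (window - 1)

# 1920-pixel-wide lines, 8-bit grayscale, 5x5 window:
# 1920 * 8 * 4 = 61440 bits of Block RAM
```

Doubling the resolution or enlarging the window therefore multiplies, rather than adds to, the Block RAM budget, which is why board resources must be sized against the target resolution up front.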