Compression formats

Motion JPEG

Motion JPEG, or M-JPEG, is a digital video sequence made up of a series of individual JPEG images. (JPEG stands for Joint Photographic Experts Group.) When 16 or more image frames are shown per second, the viewer perceives motion video. Full motion video is perceived at 30 (NTSC) or 25 (PAL) frames per second.

One of the advantages of Motion JPEG is that each image in a video sequence has the same guaranteed quality, determined by the compression level chosen for the network camera or video encoder. The higher the compression level, the smaller the file size and the lower the image quality. In some situations, such as in low light or when a scene becomes complex, the image file size can become quite large and consume more bandwidth and storage space. To prevent bandwidth and storage use from increasing in such situations, Axis network video products allow the user to set a maximum file size for an image frame.
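
As a rough illustration of how frame size and frame rate drive Motion JPEG bandwidth and storage, the short Python sketch below multiplies an assumed average frame size by the frame rate and retention period. All of the input values are assumptions chosen only for the example, not measured figures.

# Back-of-the-envelope estimate of Motion JPEG bandwidth and storage.
# All input values below are illustrative assumptions, not measurements.

avg_frame_size_kb = 30      # assumed average JPEG frame size at the chosen compression level
frame_rate = 15             # frames per second
retention_days = 7          # how long recordings are kept

bitrate_kbit_s = avg_frame_size_kb * 8 * frame_rate                               # kilobits per second
storage_gb = avg_frame_size_kb * frame_rate * 86400 * retention_days / (1024 * 1024)

print(f"Approximate bit rate: {bitrate_kbit_s:.0f} kbit/s ({bitrate_kbit_s / 1000:.1f} Mbit/s)")
print(f"Approximate storage for {retention_days} days: {storage_gb:.0f} GB")

Capping the maximum file size per frame, as described above, puts an upper bound on these figures even when low light or a complex scene would otherwise inflate them.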

Since there is no dependency between the frames in Motion JPEG, a Motion JPEG video is robust, meaning that if one frame is dropped during transmission, the rest of the video will not be affected.

Motion JPEG is an unlicensed standard. It has broad compatibility and is popular in applications where individual frames in a video sequence are required—for example, for analysis—and where lower frame rates, typically 5 frames per second or lower, are used. Motion JPEG may also be needed for applications that require integration with systems that support only Motion JPEG.
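
Because every Motion JPEG frame is a complete JPEG image, single frames can be pulled on demand for analysis. The sketch below assumes a camera that exposes a simple HTTP snapshot URL; the address and path are hypothetical placeholders, not a documented interface.

# Minimal sketch: periodically fetch individual JPEG frames for offline analysis.
# SNAPSHOT_URL is a hypothetical placeholder; consult the camera's own documentation.
import time
import urllib.request

SNAPSHOT_URL = "http://camera.example.local/snapshot.jpg"   # placeholder address

def grab_frame(path):
    """Download one complete JPEG frame and write it to disk."""
    with urllib.request.urlopen(SNAPSHOT_URL, timeout=5) as response:
        data = response.read()
    with open(path, "wb") as f:
        f.write(data)

# Roughly one frame per second; each saved file is independently decodable.
for i in range(5):
    grab_frame(f"frame_{i:03d}.jpg")
    time.sleep(1.0)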

The main disadvantage of Motion JPEG is that it makes no use of inter-frame video compression techniques to reduce the data, since it is a series of complete still images. The result is a relatively high bit rate, or a low compression ratio for the delivered quality, compared with video compression standards such as MPEG-4 and H.264.

MPEG-4

When MPEG-4 is mentioned in video surveillance applications, it usually refers to MPEG-4 Part 2, also known as MPEG-4 Visual. Like all MPEG (Moving Picture Experts Group) standards, it is a licensed standard, so users must pay a license fee per monitoring station. MPEG-4 supports low-bandwidth applications as well as applications that require high-quality images, no limitations in frame rate, and virtually unlimited bandwidth.

H.264 or MPEG-4 Part 10/AVC

H.264, also known as MPEG-4 Part 10/AVC (Advanced Video Coding), is the latest MPEG standard for video encoding. H.264 is expected to become the video standard of choice in the coming years because an H.264 encoder can, without compromising image quality, reduce the size of a digital video file by more than 80% compared with the Motion JPEG format and by as much as 50% compared with the MPEG-4 standard. This means that much less network bandwidth and storage space are required for a video file. Seen another way, much higher video quality can be achieved for a given bit rate.
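
Treating the reduction figures quoted above purely as rules of thumb, the relative bit rates can be compared against an assumed Motion JPEG baseline, as in the sketch below. The baseline value is chosen only for illustration.

# Relative bit-rate comparison using the reduction figures quoted in the text,
# applied to an assumed Motion JPEG baseline (illustrative only).

mjpeg_mbit_s = 4.0                          # assumed Motion JPEG bit rate for the scene
h264_mbit_s = mjpeg_mbit_s * (1 - 0.80)     # "more than 80%" smaller than Motion JPEG
mpeg4_mbit_s = h264_mbit_s / (1 - 0.50)     # H.264 up to 50% smaller than MPEG-4 Part 2

for name, rate in [("Motion JPEG", mjpeg_mbit_s),
                   ("MPEG-4 Part 2", mpeg4_mbit_s),
                   ("H.264", h264_mbit_s)]:
    print(f"{name:13s} ~{rate:.1f} Mbit/s")

With these assumptions, a scene that needs roughly 4 Mbit/s as Motion JPEG would need about 1.6 Mbit/s as MPEG-4 Part 2 and about 0.8 Mbit/s as H.264.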

H.264 was jointly defined by standardization organizations in the telecommunications (ITU-T’s Video Coding Experts Group) and IT industries (ISO/IEC Moving Picture Experts Group), and is expected to be more widely adopted than previous standards. In the video surveillance industry, H.264 will most likely find the quickest traction in applications where there are demands for high frame rates and high resolution, such as in the surveillance of highways, airports and casinos, where the use of 30/25 (NTSC/PAL) frames per second is the norm. This is where the economies of reduced bandwidth and storage needs will deliver the biggest savings.

H.264 is also expected to accelerate the adoption of megapixel cameras since the highly efficient compression technology can reduce the large file sizes and bit rates generated without compromising image quality. There are tradeoffs, however. While H.264 provides savings in network bandwidth and storage costs, it will require higher performance network cameras and monitoring stations.

Axis’ H.264 encoders use the baseline profile, which means that only I- and P-frames are used. Because B-frames are not used, this profile achieves low latency, making it ideal for network cameras and video encoders. Low latency is essential in video surveillance applications where live monitoring takes place, especially when PTZ cameras or PTZ dome cameras are used.
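
The latency argument can be illustrated by comparing frame dependencies. With only I- and P-frames, every frame references earlier frames and can be shown as soon as it is decoded; a B-frame also references a later frame, which must arrive first. The group-of-pictures structures below are simplified illustrations, not actual encoder output.

# Simplified illustration of why B-frames add latency.
# Each entry maps a frame's display index to the display indices it depends on.
# These structures are illustrative, not real encoder output.

baseline_gop = {      # I- and P-frames only: dependencies always point backwards
    0: [],            # I-frame
    1: [0],           # P-frame
    2: [1],           # P-frame
    3: [2],           # P-frame
}

gop_with_b_frames = { # B-frames also reference a *future* frame
    0: [],            # I-frame
    1: [0, 3],        # B-frame (needs frame 3, which arrives later)
    2: [0, 3],        # B-frame
    3: [0],           # P-frame
}

def extra_frames_to_wait(gop):
    """Largest number of later frames that must be available before a frame can be shown."""
    return max(max(deps, default=i) - i for i, deps in gop.items())

print("Baseline profile (I/P only):", extra_frames_to_wait(baseline_gop))        # 0
print("With B-frames:              ", extra_frames_to_wait(gop_with_b_frames))   # 2

Under these assumptions, the I/P-only structure never waits for a future frame, while the structure with B-frames must buffer two additional frames before the first B-frame can be displayed.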
