Increasing Camera Density Per Server By Using More Efficient HD Video Analytics
Video surveillance camera technology is consistently improving, with HD and 4K cameras being deployed around the world to ensure security teams have access to the clearest footage available. Recording high-resolution video is desirable for its richness of detail and accuracy, but it comes at a high price in terms of system resource utilization. This tradeoff is magnified when analytics are added to the mix.
The higher the resolution, the more pixels there are in each frame, and analytic computational overhead increases accordingly. This is the crux of the trade-off that limits traditional HD video analytic processing. A 720p (1280×720) frame has four times as many pixels as a 640×360 frame; a 1080p (1920×1080) frame has nine times as many.
What does this mean in real-world terms? Consider a server with enough CPU resources to run video analytics at 640×360 on 36 cameras. On the same server, running analytics at 720p drops that capacity to nine cameras, and at 1080p to just four. In other words, higher resolution lets you detect objects farther away, but at a significant cost in the number of cameras each server can support.
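The arithmetic above can be sketched in a few lines of Python. This assumes analytics CPU cost scales linearly with pixel count, and uses the article's example budget of 36 cameras at 640×360; the actual scaling in a given analytics engine may differ.

```python
# Pixel counts and resulting camera capacity per server, assuming
# analytics cost is proportional to pixels per frame (a simplification).

BASE_W, BASE_H = 640, 360
CAMERAS_AT_BASE = 36  # example server capacity from the article

resolutions = {
    "640x360": (640, 360),
    "720p (1280x720)": (1280, 720),
    "1080p (1920x1080)": (1920, 1080),
}

base_pixels = BASE_W * BASE_H

for name, (w, h) in resolutions.items():
    ratio = (w * h) // base_pixels          # pixel multiple vs. 640x360
    cameras = CAMERAS_AT_BASE // ratio      # cameras the same CPU budget supports
    print(f"{name}: {ratio}x pixels -> {cameras} cameras")
# 640x360: 1x pixels -> 36 cameras
# 720p (1280x720): 4x pixels -> 9 cameras
# 1080p (1920x1080): 9x pixels -> 4 cameras
```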
Senstar’s Adaptive Analytic Resolution technology lowers the required CPU resources by intelligently scaling video frames as needed to track near and far objects and people, increasing the number of video streams a single server can run while maintaining the same level of effectiveness.
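To illustrate the general idea of resolution scaling (this is a hypothetical sketch, not Senstar's actual implementation), one approach is to pick a downscale factor per tracked object based on its apparent size: a nearby person occupying hundreds of pixels can be analyzed on a heavily downscaled frame, while a distant person only a few dozen pixels tall needs the full-resolution pixels. The `min_target_px` threshold below is an assumed illustrative parameter.

```python
# Hypothetical sketch of adaptive analysis resolution: keep each
# tracked object at least min_target_px tall after downscaling, so
# large (near) objects are processed cheaply and small (far) objects
# retain full detail.

def analysis_scale(object_height_px: int, min_target_px: int = 40) -> float:
    """Return a downscale factor in (0, 1] for the analytics pass."""
    if object_height_px <= 0:
        return 1.0  # size unknown: fall back to full resolution
    return min(1.0, min_target_px / object_height_px)

# A person 400 px tall in the frame can be analyzed at 10% scale,
# while one only 40 px tall is analyzed at full resolution.
print(analysis_scale(400))  # near object -> 0.1
print(analysis_scale(40))   # far object  -> 1.0
```

Because analytics cost scales with pixel count, analyzing a near object at a 0.1 scale factor cuts its pixel cost by a factor of 100, which is where the per-server capacity gains come from.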
Click here for additional information on how Senstar’s Adaptive Analytic Resolution technology makes high-resolution video analytics not only possible but also easy to implement.