Abstract
Video processing is a resource-demanding task. While today's high-end machines are able to process and encode video at reasonable speeds, they are usually not capable of doing so in real time. In this thesis, we investigate and implement a distributed version of the real-time panorama video processing pipeline from the Bagadus project. The pipeline consists of several processing steps. Images are first captured from five individual cameras and then grouped into sets. These sets are converted from the Bayer image format to the YUV444 format, before they are stitched into a large panorama image. The stitched panorama video is encoded in the H.264 format and stored on disk. It is also possible to enable HDR mode in the pipeline, which produces a video with more detail visible in shadow and highlight areas. We initially created a simple distribution setup, allowing individual processing steps to run on separate machines. To improve its performance, we then implemented a more advanced setup, which removes bottlenecks and adds support for Nvidia GPUDirect for low-latency GPU-to-GPU memory copies between machines. This enables a wide range of configurations, with minimal delay added by the distribution.