Information today grows at a rate never reached before: we currently produce almost as much data in a single year as was produced in the entire previous history of mankind. In particular, the trend towards full digitization of audiovisual content contributes to this explosion of available material; the exponential growth of online video, with YouTube the most prominent among many video portals, is just one example. Even though international studies do not arrive at exactly the same figures, the numbers are impressive: digital production in 2006 was approximately 160 exabytes and is predicted to rise to 990 exabytes in 2010.
Any video processing/editing software has to keep pace with these extraordinary data rates, which demands special efforts from both hardware and software. Fortunately, we also see an extraordinary increase in processing power, especially in recent graphics cards (GPUs). These cards offer massive parallelism, ideally suited for video processing, at a rather modest price, making this hardware an ideal candidate for video processing. To make full use of it, however, the algorithms have to be highly parallel. Typical tasks encountered in video processing (which will also be tackled by the proposed project) are:
Superresolution: With the advent of HDTVs in many homes, there is an increasing need to produce HDTV content as well. To make use of existing (low-resolution) material, one can apply so-called superresolution algorithms. These methods generate a high-resolution image from a sequence of low-resolution frames by exploiting the high inter-frame redundancy.
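As an illustration only, a minimal shift-and-add superresolution sketch might look as follows. It assumes, as a deliberate simplification, that the sub-pixel shifts between the frames are already known exactly and are multiples of a high-resolution pixel; real footage has arbitrary shifts and requires registration and interpolation, which this sketch omits.

```python
import numpy as np

def shift_and_add_sr(frames, shifts, factor):
    """Fuse shifted low-resolution frames onto a high-resolution grid.

    frames: list of (h, w) arrays; shifts: list of (dy, dx) offsets,
    given in high-resolution pixels (a simplifying assumption).
    """
    h, w = frames[0].shape
    high = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(high)
    for frame, (dy, dx) in zip(frames, shifts):
        # scatter each low-resolution sample to its shifted high-res position
        high[dy::factor, dx::factor] += frame
        weight[dy::factor, dx::factor] += 1.0
    weight[weight == 0] = 1.0  # leave unsampled positions at zero
    return high / weight
```

With an upscaling factor of 2 and the four shifts (0,0), (0,1), (1,0), (1,1), the low-resolution frames together cover every high-resolution pixel, so the high-resolution image is recovered exactly; this is precisely the inter-frame redundancy the methods exploit.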
Denoising: There are many sources of noise in a video, whether the material is historic or noise is introduced during production, compression, etc. A basic task is to remove this noise while still preserving all fine-scale details.
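One classical, easily parallelized baseline (a sketch only, not the method proposed here) is a per-pixel temporal median over a small window of neighboring frames: impulse-like noise is removed in static regions while fine spatial detail is left untouched.

```python
import numpy as np

def temporal_median_denoise(frames, radius=1):
    """Denoise each frame by a per-pixel median over a temporal window.

    frames: array of shape (t, h, w); radius: half-width of the window.
    """
    frames = np.asarray(frames, dtype=float)
    out = np.empty_like(frames)
    n = len(frames)
    for t in range(n):
        # clamp the window at the start and end of the sequence
        lo, hi = max(0, t - radius), min(n, t + radius + 1)
        out[t] = np.median(frames[lo:hi], axis=0)
    return out
```

Every output pixel depends only on a few input values at the same spatial position, so the computation maps naturally onto one GPU thread per pixel.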
Interactive video editing: For post-production purposes, one wants to mark objects in a video (ideally in a single frame only, with the object then segmented automatically in all subsequent frames) and either remove them (which requires inpainting methods to fill the resulting holes with meaningful content), move them elsewhere in the video, or replace them with different objects. Since these tasks are performed interactively, they require interactive frame rates.
Fortunately, all of these tasks can be addressed by so-called variational methods. The basic idea is to formulate the task as the minimization of a suitable energy functional. Among other desirable properties, these methods can be implemented in a highly parallel fashion, which makes them ideal candidates for implementation on modern GPUs.
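To sketch the idea for denoising, consider the classical Rudin-Osher-Fatemi model, which minimizes E(u) = ∫|∇u| + (λ/2)∫(u − f)² for a noisy input image f. The sketch below uses a smoothed total-variation term (parameter eps) so that plain explicit gradient descent applies; the step size and parameter values are illustrative, not tuned.

```python
import numpy as np

def rof_denoise(f, lam=0.1, tau=0.1, eps=1e-3, iters=200):
    """Gradient descent on a smoothed ROF energy.

    Descent direction: div(grad u / |grad u|_eps) - lam * (u - f).
    """
    u = f.copy()
    for _ in range(iters):
        ux = np.gradient(u, axis=1)
        uy = np.gradient(u, axis=0)
        mag = np.sqrt(ux**2 + uy**2 + eps**2)  # smoothed gradient magnitude
        # divergence of the normalized gradient field (TV term)
        div = np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0)
        u += tau * (div - lam * (u - f))
    return u
```

Each descent step updates every pixel independently from its neighbors' current values; this per-pixel data parallelism is exactly what a GPU exploits, and it is the reason variational methods are such good candidates for GPU implementation.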