16002/TUB: Data Coding Using Sparse Mixture of Experts Regression
The primary goal of the method described here is to enable high-quality image reconstruction with a universal image coder that gives the decoder easy bit-level access to MPEG-7-like low- and mid-level image features. Natural images are mostly piecewise smooth. The idea is therefore to locate the non-stationary regions in the image, to approximate the stationary regions separately, and to blend both smoothly and carefully.
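The split between smooth and non-stationary regions can be illustrated with a minimal block-variance criterion. This is only a sketch: the function name, block size, and threshold below are hypothetical and not part of the described method.

```python
import numpy as np

def nonstationary_mask(img, block=4, thresh=0.01):
    """Flag image blocks whose local variance exceeds a threshold.

    Blocks crossing an edge have high variance (non-stationary);
    flat or smoothly varying blocks do not. Illustrative criterion only.
    """
    h, w = img.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
            mask[i, j] = patch.var() > thresh
    return mask

# Toy image: a vertical step edge inside the left block column
img = np.zeros((8, 8))
img[:, 2:] = 1.0
mask = nonstationary_mask(img, block=4, thresh=0.01)
# The edge-crossing blocks are flagged; the flat blocks are not.
```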
The technique presented here offers a universal, bit-efficient video compression approach with MPEG-7-like features intrinsically embedded in the bitstream. It uses a Sparse Mixture-of-Experts (SMoE) regression approach to encode videos in the pixel domain, departing drastically from the established DPCM / transform coding philosophy. Here the MoE takes the form of a Gaussian Mixture Regression (GMR) for multivariate nonlinear regression: the underlying stochastic process of the pixel amplitudes is modelled as a 3-dimensional, multimodal mixture of K Gaussian modes, so that each component in the MoE steers in the direction of the highest correlation.
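The GMR step can be sketched as follows for a still grayscale image, where each sample is a 3-D point (x, y, amplitude): a K-mode Gaussian mixture is fitted to the samples, and the image is reconstructed as the conditional expectation of the amplitude given the pixel position. The EM loop, mode count, and all function names below are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def fit_gmm(samples, K=2, iters=30, seed=0):
    """Fit a K-mode full-covariance Gaussian mixture to 3-D samples via EM."""
    rng = np.random.default_rng(seed)
    n, d = samples.shape
    mu = samples[rng.choice(n, K, replace=False)].astype(float)
    cov = np.stack([np.cov(samples.T) + 1e-3 * np.eye(d)] * K)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibility of each mode for each sample
        resp = np.empty((n, K))
        for k in range(K):
            diff = samples - mu[k]
            inv = np.linalg.inv(cov[k])
            mahal = np.einsum("ni,ij,nj->n", diff, inv, diff)
            norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov[k]))
            resp[:, k] = pi[k] * np.exp(-0.5 * mahal) / norm
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, covariances (regularized)
        nk = resp.sum(axis=0)
        pi = nk / n
        for k in range(K):
            mu[k] = resp[:, k] @ samples / nk[k]
            diff = samples - mu[k]
            cov[k] = (resp[:, k, None] * diff).T @ diff / nk[k] + 1e-3 * np.eye(d)
    return pi, mu, cov

def gmr_predict(coords, pi, mu, cov):
    """Conditional expectation E[amplitude | x, y] under the fitted mixture."""
    K = len(pi)
    n = coords.shape[0]
    w = np.empty((n, K))
    cond = np.empty((n, K))
    for k in range(K):
        mu_s, mu_a = mu[k][:2], mu[k][2]
        S_ss, S_as = cov[k][:2, :2], cov[k][2, :2]
        inv = np.linalg.inv(S_ss)
        diff = coords - mu_s
        mahal = np.einsum("ni,ij,nj->n", diff, inv, diff)
        norm = np.sqrt((2 * np.pi) ** 2 * np.linalg.det(S_ss))
        w[:, k] = pi[k] * np.exp(-0.5 * mahal) / norm
        cond[:, k] = mu_a + diff @ inv @ S_as  # per-expert conditional mean
    w /= w.sum(axis=1, keepdims=True)          # soft gating between experts
    return (w * cond).sum(axis=1)

# Toy image: two flat regions separated by one edge
xs, ys = np.meshgrid(np.arange(16), np.arange(16), indexing="ij")
img = np.where(ys < 8, 0.2, 0.8)
samples = np.stack([xs.ravel(), ys.ravel(), img.ravel()], axis=1).astype(float)
pi, mu, cov = fit_gmm(samples, K=2)
rec = gmr_predict(samples[:, :2], pi, mu, cov)
```

Only the mixture parameters (weights, means, covariances) need to be transmitted, which is what makes the representation both bit-efficient and directly usable as a low-level feature description.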
Experiments show that, compared to JPEG, a considerable compression gain can be achieved for a large class of images at low bitrates, while attractive low-level descriptors of the image are provided. In this way the SMoE shows a strong resemblance to MoE neural networks, while delivering a performance that can compete with H.264.
The method provides an almost universal, bit-efficient video compression approach that can be used in various coding applications.
- Universal Image/Video Coding
- Compression gain at low bitrates
- MPEG-7-like features embedded
- Performance competitive with H.264
PCT - pending
Technische Universität Berlin
- R&D Cooperation
- Patent Purchase
Contact Person: Ina Krüger
Tel.: 030 314-75916