With the thriving popularity of ultra-high-definition, high-dynamic-range, and wide-color-gamut content, and the rising user expectations that follow, banding has become an increasingly important issue for the streaming industry. Banding is an annoying visual artifact that frequently appears at various stages along the video distribution chain. Also known as false contouring, it occurs when the granularity of color or intensity levels is too coarse to reproduce the visual perception of smooth color and luminance transitions. The discontinuities in what should be smooth image gradients are then perceived as wide, discrete bands.
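The mechanism described above can be illustrated with a minimal sketch (not part of the presentation): quantizing a smooth luminance ramp to too few levels collapses it into a handful of wide bands, which is exactly the false-contouring effect.

```python
import numpy as np

def quantize(gradient, levels):
    """Quantize values in [0, 1] to a fixed number of discrete levels."""
    return np.round(gradient * (levels - 1)) / (levels - 1)

# A smooth horizontal luminance ramp from 0 to 1.
ramp = np.linspace(0.0, 1.0, 1024)

# With only 8 levels the ramp collapses into wide discrete bands
# (false contours); with 1024 levels the steps remain fine-grained.
banded = quantize(ramp, 8)
smooth = quantize(ramp, 1024)

print(len(np.unique(banded)))  # 8 distinct bands
print(len(np.unique(smooth)))  # 1024 distinct values
```

This also hints at why simply raising bit depth is not a cure-all: if coarse quantization was introduced earlier in the distribution chain, later stages inherit the bands.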
Given the increased emphasis on video quality, banding has attracted growing attention for its strong negative impact on viewer experience in content that could otherwise be of nearly perfect quality. Among the frustrations: increasing the bit depth or bitrate of a video does not necessarily remove or reduce banding, and the visibility of banding can vary drastically across scenes and across viewing devices. In this presentation, we take a deep dive into two very different types of technologies that have shown great promise in detecting and removing banding. The first type is knowledge-driven, built upon computational models that account for the characteristics of the human visual system, the content production and distribution processes, the color representation methods, the OETF and EOTF transfer functions, the display devices, and the interplay between them. The second type is data-driven, based on machine learning methods, for example training deep neural networks (DNNs) end-to-end on large-scale datasets.
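To make the knowledge-driven idea concrete, here is a hypothetical, heavily simplified sketch of one of its building blocks: in a 1-D luminance scanline, a candidate banding edge is a one-code-value step flanked by flat runs. The function name and the `min_run` threshold are illustrative assumptions, not parameters from the presentation, and a real detector would also model viewing conditions, transfer functions, and display characteristics.

```python
import numpy as np

def band_edges(scanline, min_run=8):
    """Return indices of 1-code-value steps bounded by flat runs.

    A toy proxy for banding detection: a visible band edge is a tiny
    step (1 code value) separating two wide, perfectly flat regions.
    `min_run` (an illustrative assumption) is the flat-run length
    required on each side of the step.
    """
    diffs = np.diff(scanline.astype(int))
    edges = []
    for i in np.flatnonzero(np.abs(diffs) == 1):
        left = diffs[max(0, i - min_run):i]
        right = diffs[i + 1:i + 1 + min_run]
        if len(left) == min_run and len(right) == min_run \
                and not left.any() and not right.any():
            edges.append(int(i))
    return edges

# A synthetic 8-bit scanline with two wide bands: 100 then 101.
scan = np.array([100] * 20 + [101] * 20, dtype=np.uint8)
print(band_edges(scan))  # [19]: one band edge between the two runs
```

A data-driven detector would instead learn such cues (and much subtler ones) directly from labeled examples, at the cost of requiring large-scale annotated datasets.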
In this presentation we will show the key ideas and technical details behind these approaches, discuss their pros and cons, and demonstrate them using real examples.