Speaker: Markus Ernst
The increasing computational power and programmability of modern graphics hardware provide developers of real-time rendering applications with the resources needed to realize ever more complex graphical effects. Some of these effects, such as depth of field, require an efficient image blurring technique to achieve real-time frame rates of 30 frames per second or above. This work presents a comparison of various blurring techniques in terms of their performance on modern graphics hardware. Whereas most of the chosen methods can only blur an image, some of them are capable of applying an arbitrary filter. Furthermore, the quality of the different methods has been determined using an automatic process that utilizes a calibrated visual metric. Another aspect of modern graphics hardware is the growing range of operations, especially in the domain of image processing, that can be carried out through general-purpose computing on graphics processing units (GPGPU). In recent years, GPGPU has become increasingly popular within real-time rendering applications for specialized tasks such as physics simulation. Therefore, all chosen algorithms have been implemented using both shaders (GLSL) and GPGPU (CUDA) to answer the question of whether a general-purpose computing language is suitable for image blurring in real-time rendering and how it compares to a shading language.
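The abstract gives no implementation details. As a rough illustration of the kind of CUDA code such a comparison involves, the sketch below shows the horizontal pass of a separable Gaussian blur; the kernel name, the fixed radius, and the grayscale row-major float layout are assumptions made for this example, not taken from the work itself.

    #include <cuda_runtime.h>

    #define RADIUS 4  // filter radius assumed for the example: a 9-tap kernel

    // Precomputed, normalized Gaussian weights, uploaded by the host
    // via cudaMemcpyToSymbol before the kernel launch.
    __constant__ float d_weights[2 * RADIUS + 1];

    // Horizontal pass of a separable Gaussian blur over a grayscale
    // float image stored in row-major order.
    __global__ void blurHorizontal(const float* in, float* out,
                                   int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        float sum = 0.0f;
        for (int k = -RADIUS; k <= RADIUS; ++k) {
            // Clamp reads at the image border (clamp-to-edge addressing).
            int xs = min(max(x + k, 0), width - 1);
            sum += d_weights[k + RADIUS] * in[y * width + xs];
        }
        out[y * width + x] = sum;
    }

A vertical pass over the intermediate result completes the blur. In GLSL, the same per-pixel loop would typically sit in a fragment shader that samples a texture along one axis, which is the kind of shader-versus-GPGPU pairing the comparison examines.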