dc.contributor.author | Jin, Bongjun | en_US |
dc.contributor.author | Ihm, Insung | en_US |
dc.contributor.author | Chang, Byungjoon | en_US |
dc.contributor.author | Park, Chanmin | en_US |
dc.contributor.author | Lee, Wonjong | en_US |
dc.contributor.author | Jung, Seokyoon | en_US |
dc.contributor.editor | David Luebke and Philipp Slusallek | en_US |
dc.date.accessioned | 2013-10-29T15:48:18Z | |
dc.date.available | 2013-10-29T15:48:18Z | |
dc.date.issued | 2009 | en_US |
dc.identifier.isbn | 978-1-60558-603-8 | en_US |
dc.identifier.issn | 2079-8687 | en_US |
dc.identifier.uri | http://dx.doi.org/10.1145/1572769.1572788 | en_US |
dc.description.abstract | While supersampling is an essential element of high-quality rendering, the high sampling rates routinely employed in offline rendering are still considered quite burdensome for real-time ray tracing. In this paper, we propose a selective and adaptive supersampling technique aimed at the development of a real-time ray tracer on today's many-core processors. To make efficient use of very precious computing time, the technique explores both image-space and object-space attributes, which can easily be gathered during the ray tracing computation. Rendering artifacts are minimized by cleverly distributing ray samples to rendering elements according to priorities that are selectively set by the user. Our implementation on a current GPU demonstrates that the presented algorithm makes high sampling rates, as effective as 9 to 16 samples per pixel, more affordable than before for real-time ray tracing. | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | Selective and Adaptive Supersampling for Real-Time Ray Tracing | en_US |
dc.description.seriesinformation | High-Performance Graphics | en_US |
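The abstract only outlines the approach. As a rough illustration, and not the authors' implementation, the following C++ sketch shows one generic way selective/adaptive supersampling of this kind can be organized: a first pass stores per-pixel image-space (color) and object-space (object ID, normal) attributes, a per-pixel priority is derived from discontinuities with neighboring pixels, and only high-priority pixels receive additional jittered samples. The toy scene, priority weights, threshold, and sample budget are illustrative assumptions, not values from the paper.

// Hedged sketch of priority-driven adaptive supersampling (not the paper's code).
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Sample { Vec3 color; int objectId; Vec3 normal; };

static const int W = 64, H = 64;

// Toy "scene": one sphere over a flat background, so the sketch is runnable
// without a full ray tracer. (Placeholder for real primary-ray tracing.)
Sample trace(float u, float v) {
    float dx = u - 0.5f, dy = v - 0.5f;
    float r2 = dx * dx + dy * dy;
    if (r2 < 0.09f) {                                  // sphere hit
        float nz = std::sqrt(0.09f - r2);
        return { {1.0f, 0.3f, 0.3f}, 1, {dx, dy, nz} };
    }
    return { {0.1f, 0.1f, 0.4f}, 0, {0.0f, 0.0f, 1.0f} };  // background
}

float luminance(const Vec3& c) { return 0.299f * c.x + 0.587f * c.y + 0.114f * c.z; }

int main() {
    std::vector<Sample> base(W * H);
    std::vector<Vec3>   out(W * H);

    // Pass 1: one sample per pixel, keeping image- and object-space attributes.
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            base[y * W + x] = trace((x + 0.5f) / W, (y + 0.5f) / H);
            out[y * W + x]  = base[y * W + x].color;
        }

    // Pass 2: per-pixel priority from discontinuities with the right/bottom
    // neighbors; weights and threshold are illustrative assumptions.
    int extraTotal = 0;
    for (int y = 0; y < H - 1; ++y) {
        for (int x = 0; x < W - 1; ++x) {
            const Sample& s  = base[y * W + x];
            const Sample& sx = base[y * W + x + 1];
            const Sample& sy = base[(y + 1) * W + x];
            float priority = 0.0f;
            priority += std::fabs(luminance(s.color) - luminance(sx.color));
            priority += std::fabs(luminance(s.color) - luminance(sy.color));
            if (s.objectId != sx.objectId || s.objectId != sy.objectId)
                priority += 1.0f;                      // object-space (silhouette) cue

            // Adaptive allocation: high-priority pixels get up to 16 samples in
            // total; everything else keeps its single sample.
            int extra = (priority > 0.25f) ? 15 : 0;
            Vec3 acc = s.color;
            for (int i = 0; i < extra; ++i) {
                float ju = (x + (i % 4 + 0.5f) / 4.0f) / W;   // 4x4 jitter grid
                float jv = (y + (i / 4 + 0.5f) / 4.0f) / H;
                Vec3 c = trace(ju, jv).color;
                acc.x += c.x; acc.y += c.y; acc.z += c.z;
            }
            extraTotal += extra;
            float inv = 1.0f / (1 + extra);
            out[y * W + x] = { acc.x * inv, acc.y * inv, acc.z * inv };
        }
    }
    std::printf("extra samples spent: %d over %d pixels\n", extraTotal, W * H);
    return 0;
}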