dc.contributor.author | Kerbl, Bernhard | en_US |
dc.contributor.author | Kenzel, Michael | en_US |
dc.contributor.author | Schmalstieg, Dieter | en_US |
dc.contributor.author | Seidel, Hans‐Peter | en_US |
dc.contributor.author | Steinberger, Markus | en_US |
dc.contributor.editor | Chen, Min and Zhang, Hao (Richard) | en_US |
dc.date.accessioned | 2018-01-10T07:42:52Z | |
dc.date.available | 2018-01-10T07:42:52Z | |
dc.date.issued | 2017 | |
dc.identifier.issn | 1467-8659 | |
dc.identifier.uri | http://dx.doi.org/10.1111/cgf.13075 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.1111/cgf13075 | |
dc.description.abstract | While the modern graphics processing unit (GPU) offers massive parallel compute power, the ability to influence the scheduling of these immense resources is severely limited. Therefore, the GPU is widely considered to be suitable only as an externally controlled co‐processor for homogeneous workloads, which greatly restricts the potential applications of GPU computing. To address this issue, we present a new method to achieve fine‐grained priority scheduling on the GPU: hierarchical bucket queuing. By carefully distributing the workload among multiple queues and efficiently deciding which queue to draw work from next, we enable a variety of scheduling strategies. These strategies include fair scheduling, earliest‐deadline‐first scheduling and user‐defined dynamic priority scheduling. In a comparison with a sorting‐based approach, we reveal the advantages of hierarchical bucket queuing over previous work. Finally, we demonstrate the benefits of using priority scheduling in real‐world applications by example of path tracing and foveated micropolygon rendering. | en_US |
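dc.description.note | To illustrate the bucket-queuing idea summarized in the abstract, the following is a minimal host-side C++ sketch: work items are distributed into per-priority buckets, and the scheduler always draws from the highest-priority non-empty bucket. This is an illustration only, not the paper's concurrent GPU implementation; all names (BucketQueue, WorkItem, etc.) are invented for this sketch.

    // Sketch of priority scheduling via bucket queues (illustrative only).
    #include <cstdio>
    #include <deque>
    #include <optional>
    #include <vector>

    struct WorkItem {
        int id;
        int priority;  // 0 = highest priority
    };

    class BucketQueue {
    public:
        explicit BucketQueue(int num_buckets) : buckets_(num_buckets) {}

        // Enqueue into the bucket matching the item's priority.
        void push(const WorkItem& item) {
            buckets_[item.priority].push_back(item);
        }

        // Dequeue from the highest-priority (lowest index) non-empty bucket.
        std::optional<WorkItem> pop() {
            for (auto& bucket : buckets_) {
                if (!bucket.empty()) {
                    WorkItem item = bucket.front();
                    bucket.pop_front();
                    return item;
                }
            }
            return std::nullopt;  // all buckets are empty
        }

    private:
        std::vector<std::deque<WorkItem>> buckets_;
    };

    int main() {
        BucketQueue queue(3);
        queue.push({0, 2});
        queue.push({1, 0});
        queue.push({2, 1});

        // Items come back in priority order: 1 (prio 0), 2 (prio 1), 0 (prio 2).
        while (auto item = queue.pop()) {
            std::printf("work item %d (priority %d)\n", item->id, item->priority);
        }
        return 0;
    } | en_US |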
dc.publisher | © 2017 The Eurographics Association and John Wiley & Sons Ltd. | en_US |
dc.subject | GPU, queuing, priority scheduling | |
dc.subject | parallel computing | |
dc.subject | I.3.1 [Computer Graphics]: Hardware Architecture—Parallel processing | |
dc.title | Hierarchical Bucket Queuing for Fine‐Grained Priority Scheduling on the GPU | en_US |
dc.description.seriesinformation | Computer Graphics Forum | |
dc.description.sectionheaders | Articles | |
dc.description.volume | 36 | |
dc.description.number | 8 | |
dc.identifier.doi | 10.1111/cgf.13075 | |
dc.identifier.pages | 232-246 | |