One embodiment of the present invention sets forth a technique for reducing the amount of memory required to store vertex data processed within a processing pipeline that includes a plurality of shading engines. The method includes determining a first active shading engine and a second active shading engine included within the processing pipeline, wherein the second active shading engine receives vertex data output by the first active shading engine. An output map is received and indicates one or more attributes that are included in the vertex data and output by the first active shading engine. An input map is received and indicates one or more attributes that are included in the vertex data and received by the second active shading engine from the first active shading engine. Then, a buffer map is generated based on the input map, the output map, and a pre-defined set of rules that includes rule data associated with both the first shading engine and the second shading engine, wherein the buffer map indicates one or more attributes that are included in the vertex data and stored in a memory that is accessible by both the first active shading engine and the second active shading engine.
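The buffer-map idea above can be illustrated with a minimal sketch. This is not the patented implementation; the function name `generate_buffer_map`, the stage names, and the rule-table layout are all illustrative assumptions. The sketch keeps only the attributes that the downstream shader actually reads from the upstream shader's output, plus any attributes a rule set mandates for that stage pair:

```python
# Hypothetical sketch: a buffer map is the set of attributes both written by
# the producer stage and read by the consumer stage, plus any rule-mandated
# attributes for that stage pair (e.g., position is always stored).
def generate_buffer_map(producer, consumer, output_map, input_map, rules):
    """output_map/input_map are sets of attribute names; rules maps a
    (producer, consumer) stage pair to attributes that must be kept."""
    buffer_map = output_map & input_map            # only attributes actually consumed
    buffer_map |= rules.get((producer, consumer), set())  # rule-mandated attributes
    return buffer_map

out_map = {"position", "normal", "color", "texcoord0"}
in_map = {"position", "color"}
rules = {("vertex", "geometry"): {"position"}}
print(sorted(generate_buffer_map("vertex", "geometry", out_map, in_map, rules)))
# ['color', 'position'] -- 'normal' and 'texcoord0' are never buffered
```

Attributes written but never read ('normal', 'texcoord0' here) are dropped from the intermediate buffer, which is the memory saving the abstract describes.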
Jesse David Hall - Santa Clara CA, US Patrick R. Brown - Wake Forest NC, US Gernot Schaufler - Mountain View CA, US Mark D. Stadler - Los Altos CA, US
International Classification:
G06T 15/80
US Classification:
345522
Abstract:
One embodiment of the present invention sets forth a technique for configuring a graphics processing pipeline (GPP) to process data according to one or more shader programs. The method includes receiving a plurality of pointers, where each pointer references a different shader program header (SPH) included in a plurality of SPHs, and each SPH is associated with a different shader program that executes within the GPP. For each SPH included in the plurality of SPHs, one or more GPP configuration parameters included in the SPH are identified, and the GPP is adjusted based on the one or more GPP configuration parameters.
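A minimal sketch of the header-driven configuration loop described above. The names (`configure_pipeline`, `PipelineConfig`, the parameter keys) are assumptions for illustration, not the actual SPH format; the point is the shape of the algorithm: dereference each SPH pointer, read its configuration parameters, and adjust the pipeline state accordingly.

```python
# Hypothetical sketch of configuring a graphics processing pipeline (GPP)
# from a list of shader program header (SPH) pointers.
class PipelineConfig(dict):
    """Toy stand-in for GPP state; real hardware would write registers."""
    def adjust(self, params):
        self.update(params)   # later stages may override earlier settings

def configure_pipeline(sph_pointers, sph_table):
    config = PipelineConfig()
    for ptr in sph_pointers:          # one pointer per active shader program
        sph = sph_table[ptr]          # dereference the header
        config.adjust(sph["params"])  # apply its GPP configuration parameters
    return config

# Illustrative SPH table: addresses and parameter names are made up.
sph_table = {
    0x100: {"params": {"output_topology": "triangles", "max_output_vertices": 3}},
    0x200: {"params": {"early_z": True}},
}
cfg = configure_pipeline([0x100, 0x200], sph_table)
print(cfg["output_topology"], cfg["early_z"])  # triangles True
```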
Lacky V. SHAH - Los Altos Hills CA, US Gregory Scott Palmer - Cedar Park TX, US Gernot Schaufler - Mountain View CA, US Samuel H. Duncan - Arlington MA, US Philip Browning Johnson - Campbell CA, US Shirish Gadre - Fremont CA, US Robert Ohannessian - Austin TX, US Nicholas Wang - Saratoga CA, US Christopher Lamb - San Jose CA, US Philip Alexander Cuadra - Mountain View CA, US Timothy John Purcell - Provo UT, US
International Classification:
G06F 9/38
US Classification:
712234, 712E09062
Abstract:
One embodiment of the present invention sets forth a technique for instruction-level and compute thread array granularity execution preemption. Preempting at the instruction level does not require any draining of the processing pipeline: no new instructions are issued and the context state is unloaded from the processing pipeline. When preemption is performed at a compute thread array boundary, the amount of context state to be stored is reduced because execution units within the processing pipeline complete execution of in-flight instructions and become idle. If the amount of time needed to complete execution of the in-flight instructions exceeds a threshold, then the preemption may dynamically change to be performed at the instruction level instead of at compute thread array granularity.
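The dynamic fallback described in the abstract can be sketched as a simple decision rule. This is an illustrative model only, with assumed names and units, not the hardware mechanism: prefer the cheaper compute-thread-array (CTA) boundary preemption, but switch to instruction-level preemption when draining the in-flight work would take too long.

```python
# Hypothetical sketch: choose a preemption granularity based on an estimate
# of how long it would take the pipeline to drain in-flight instructions.
def choose_preemption(drain_estimate_us, threshold_us):
    if drain_estimate_us <= threshold_us:
        # Cheap to drain: let in-flight instructions finish, then save the
        # smaller context of an idle pipeline.
        return "cta_boundary"
    # Too slow to drain: stop issuing instructions immediately and unload
    # the full pipeline context state.
    return "instruction_level"

print(choose_preemption(50, 100))   # cta_boundary
print(choose_preemption(500, 100))  # instruction_level
```

The trade-off the abstract describes is visible here: CTA-boundary preemption saves context-state memory, while instruction-level preemption bounds preemption latency.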
Lacky V. SHAH - Los Altos Hills CA, US Gregory Scott Palmer - Cedar Park TX, US Gernot Schaufler - Mountain View CA, US Samuel H. Duncan - Arlington MA, US Philip Browning Johnson - Campbell CA, US Shirish Gadre - Fremont CA, US Timothy John Purcell - Provo UT, US
International Classification:
G06F 9/38
US Classification:
712228, 712E09062
Abstract:
One embodiment of the present invention sets forth a technique for instruction-level and compute thread array granularity execution preemption. Preempting at the instruction level does not require any draining of the processing pipeline: no new instructions are issued and the context state is unloaded from the processing pipeline. When preemption is performed at a compute thread array boundary, the amount of context state to be stored is reduced because execution units within the processing pipeline complete execution of in-flight instructions and become idle. If the amount of time needed to complete execution of the in-flight instructions exceeds a threshold, then the preemption may dynamically change to be performed at the instruction level instead of at compute thread array granularity.
Youtube
Gernot Schweizer - Strengthening of the entire support...
Modified push-ups, suitable for the entire musculoskeletal support system, strength...
Duration:
35s
Opus & The Schick Sisters - "Flyin' High" liv...
From the concert OPUS & Friends "Tonight At The Opera", a charity for ...
Duration:
5m 2s
Gernot Reinstadler's fatal crash - FUS RO DAH!
Duration:
14s
"Un/true (UA)" - Trailer at Schauspiel Stuttg...
A video walk by Gernot Grünewald & Thomas Taube: "Scientists should..."
Duration:
1m 22s
2011-08-03-M2U00...
Roebuck responding to the "Sprengfiep" fawn-distress call (Demmel caller), unfortunately without...
Duration:
10s
OPUS - "Opusphere" live Oper Graz 16.12.2019
From the concert OPUS & Friends "Tonight At The Opera", a charity for ...
Duration:
4m 6s
OPUS - "Eleven" live Oper Graz 16.12.2019
From the concert OPUS & Friends "Tonight At The Opera", a charity for ...