Christopher J. Berry - Hudson NY, US Lawrence David Curley - Round Rock TX, US Patrick James Meaney - Poughkeepsie NY, US Diana Lynn Orf - Poughkeepsie NY, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G01R 31/28 G06F 17/50
US Classification:
714726, 714729, 716122, 716125
Abstract:
A method for optimizing scan chains in an integrated circuit that has multiple levels of hierarchy addresses unlimited chains and stumps separately from all other chains and stumps. Unlimited chains and stumps are optimized by dividing the area encompassed by the chains, and bounded by a start point and an end point of the stump, into a grid comprised of a plurality of grid boxes, and by determining a grid-box-to-grid-box connectivity route that accesses all of the grid boxes between the start point and the end point by means of a computer running a routing algorithm. All other chains and stumps are optimized by randomly assigning to a stump a chain that can be physically reached by that stump and by adding an additional chain to that stump based on the number of latches in the additional chain, its physical location, and the number of latches already assigned.
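The grid-based step above can be illustrated with a minimal sketch. The patent does not prescribe a particular routing algorithm, so this example assumes a simple serpentine (boustrophedon) sweep as the heuristic that visits every grid box between a start corner and an end corner; the function name and grid encoding are illustrative only.

```python
def serpentine_route(cols, rows):
    """Visit every grid box from the corner (0, 0) to the opposite side
    using a serpentine sweep: left-to-right on even rows, right-to-left
    on odd rows, so consecutive boxes are always adjacent. This is one
    assumed routing heuristic, not the patent's specific algorithm."""
    route = []
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            route.append((c, r))
    return route
```

Each consecutive pair in the returned route differs by one grid step, which keeps the chain wiring between neighboring boxes short.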
Method, System And Computer Program Product For Enhanced Shared Store Buffer Management Scheme With Limited Resources For Optimized Performance
Gary E. Strait - Poughkeepsie NY, US Mark A. Check - Hopewell Junction NY, US Hong Deng - Poughkeepsie NY, US Diana L. Orf - Hyde Park NY, US Hanno Ulrich - Boeblingen, DE
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 13/00
US Classification:
710 52, 710 53, 710 54, 710 57
Abstract:
The exemplary embodiment of the present invention provides a storage buffer management scheme for I/O store buffers. Specifically, the storage buffer management system described in the exemplary embodiment is configured to comprise storage buffers capable of efficiently supporting 128-byte or 256-byte I/O data transmissions. The storage buffer management scheme allows a limited number of store buffers to be associated with a fixed number of storage state machines (i.e., queue positions), and the matched pairs to then be allocated so as to achieve maximum store throughput for varying combinations of 128-byte and 256-byte store sizes.
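A minimal sketch of the buffer/state-machine pairing described above. It assumes (this is not stated in the abstract) that a 128-byte store consumes one buffer and a 256-byte store consumes two, each allocation also occupying one state machine; the class and method names are illustrative.

```python
class StoreBufferPool:
    """Hedged sketch: a limited pool of store buffers paired with a
    fixed number of storage state machines (queue positions)."""

    def __init__(self, n_buffers, n_machines):
        self.free_buffers = n_buffers
        self.free_machines = n_machines

    def allocate(self, size):
        """Try to admit a store of 128 or 256 bytes; assumed cost is
        one buffer per 128 bytes plus one state machine per store."""
        need = 1 if size == 128 else 2
        if self.free_machines >= 1 and self.free_buffers >= need:
            self.free_machines -= 1
            self.free_buffers -= need
            return True
        return False

    def release(self, size):
        """Return the resources when the store completes."""
        self.free_machines += 1
        self.free_buffers += 1 if size == 128 else 2
```

With, say, four buffers and three state machines, one 256-byte and two 128-byte stores exhaust the buffers, and further stores must wait for a release.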
Ekaterina M. Ambroladze - Wappingers Falls NY, US Deanna Postles Dunn Berger - Poughkeepsie NY, US Michael Fee - Cold Spring NY, US Diana Lynn Orf - Somerville MA, US
Assignee:
International Business Machines Corporation - Armonk NY
Abstract:
A method of managing a temporary memory includes: receiving a request to transfer data from a source location to a destination location, the data transfer request associated with an operation to be performed, the operation selected from an input into an intermediate temporary memory and an output; checking a two-state indicator associated with the temporary memory, the two-state indicator having a first state indicating that an immediately preceding operation on the temporary memory was an input to the temporary memory and a second state indicating that the immediately preceding operation was an output from the temporary memory; and performing the operation responsive to one of: the operation being an input operation and the two-state indicator being in the second state, indicating that the immediately preceding operation was an output; and the operation being an output operation and the two-state indicator being in the first state, indicating that the immediately preceding operation was an input.
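The two-state indicator above enforces strict alternation of inputs and outputs on the temporary memory. A minimal sketch, with assumed names and a single-entry buffer for illustration:

```python
class TemporaryBuffer:
    """Sketch of the two-state indicator: an input is permitted only if
    the immediately preceding operation was an output, and vice versa."""
    INPUT, OUTPUT = 0, 1

    def __init__(self):
        # Assumed convention: an empty buffer behaves as if it was
        # just drained, so the first operation must be an input.
        self.last = self.OUTPUT
        self.data = None

    def try_input(self, data):
        if self.last != self.OUTPUT:
            return False  # previous op was an input; reject
        self.data, self.last = data, self.INPUT
        return True

    def try_output(self):
        if self.last != self.INPUT:
            return None  # previous op was an output; nothing to drain
        self.last = self.OUTPUT
        return self.data
```

A second input before any output is refused, which is exactly the overwrite hazard the indicator guards against.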
Dynamic Multi-Level Cache Including Resource Access Fairness Scheme
Ekaterina M. Ambroladze - Wappingers Falls NY, US Deanna Postles Dunn Berger - Poughkeepsie NY, US Diana Lynn Orf - Somerville MA, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 13/18
US Classification:
710240, 710243
Abstract:
An apparatus for controlling access to a resource includes a shared pipeline configured to communicate with the resource, a plurality of command queues configured to form instructions for the shared pipeline, and an arbiter coupled between the shared pipeline and the plurality of command queues and configured to grant access to the shared pipeline to one of the plurality of command queues based on a first priority scheme in a first operating mode. The apparatus also includes interface logic coupled to the arbiter and configured to determine that contention for access to the resource exists among the plurality of command queues and to cause the arbiter to grant access to the shared pipeline based on a second priority scheme in a second operating mode.
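The abstract names only a first and a second priority scheme; a minimal sketch must pick concrete ones, so this example assumes fixed priority in the normal mode and round-robin rotation once the interface logic flags contention. All names are illustrative.

```python
class Arbiter:
    """Sketch of the dual-mode arbiter: fixed-priority grants normally,
    rotating (round-robin) grants when contention is signaled."""

    def __init__(self, n_queues):
        self.n = n_queues
        self.rr_next = 0        # next queue to favor in fairness mode
        self.contention = False # set by the interface logic

    def grant(self, requests):
        """requests: list of bools, one per command queue.
        Returns the index of the granted queue, or None."""
        order = (range(self.n) if not self.contention
                 else [(self.rr_next + i) % self.n for i in range(self.n)])
        for q in order:
            if requests[q]:
                if self.contention:
                    self.rr_next = (q + 1) % self.n
                return q
        return None
```

In the fixed-priority mode, a persistently requesting low-index queue can starve higher indices; switching to the rotating order under contention restores fairness.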
Deanna P. Berger - Poughkeepsie NY, US Michael F. Fee - Cold Spring NY, US Christine C. Jones - Poughkeepsie NY, US Diana L. Orf - Somerville MA, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 12/14
US Classification:
711130, 711145, 711152
Abstract:
Various embodiments of the present invention manage a hierarchical store-through memory cache structure. A store request queue is associated with one processing core among multiple processing cores. At least one blocking condition is determined to have occurred at the store request queue. In response to the blocking condition having occurred, multiple non-store requests, and a set of store requests associated with the remaining set of processing cores, are dynamically blocked from accessing a memory cache.
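A minimal sketch of the blocking policy described above. It assumes (the abstract does not spell this out) that while the condition holds, only store requests from the core whose queue triggered the block may still reach the cache; request and condition encodings are illustrative.

```python
def admit(request, blocking):
    """Sketch: decide whether a request may access the memory cache.
    request:  {"type": "store" | "fetch", "core": int}
    blocking: {"active": bool, "core": int}  # core whose store queue blocked
    Assumed policy: under an active blocking condition, non-store
    requests and other cores' stores are held back."""
    if not blocking["active"]:
        return True
    return request["type"] == "store" and request["core"] == blocking["core"]
```

This lets the blocked core's own stores drain first, relieving the condition before other traffic resumes.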
Deanna P. Berger - Poughkeepsie NY, US Michael F. Fee - Cold Spring NY, US Christine C. Jones - Poughkeepsie NY, US Diana L. Orf - Somerville MA, US
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
G06F 12/12
US Classification:
711133, 711118
Abstract:
Various embodiments of the present invention merge data in a cache memory. In one embodiment, a set of store data is received from a processing core. A store merge command and a merge mask are also received from the processing core. A portion of the store data on which to perform a merging operation is identified based on the store merge command. A sub-portion of that portion of the store data, to be merged with a corresponding set of data from a cache memory, is identified based on the merge mask. The sub-portion is merged with the corresponding set of data from the cache memory.
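The mask-driven merge can be sketched at byte granularity. This assumes (not stated in the abstract) that bit i of the merge mask selects byte i of the store data, with a set bit taking the store byte and a clear bit keeping the cache byte:

```python
def merge_bytes(store_data, cache_data, merge_mask):
    """Sketch of the byte-granular merge: where a mask bit is 1, the
    store byte wins; otherwise the existing cache byte is kept.
    Bit i of merge_mask governs byte i (an assumed convention)."""
    return bytes(
        s if (merge_mask >> i) & 1 else c
        for i, (s, c) in enumerate(zip(store_data, cache_data))
    )
```

For example, a mask of 0b0101 over four bytes keeps bytes 1 and 3 from the cache line and takes bytes 0 and 2 from the store data.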
Deanna Postles Dunn Berger - S Hyde Park NY, US Ekaterina M. Ambroladze - Wappingers Falls NY, US Michael Fee - Cold Spring NY, US Diana Lynn Orf - Somerville MA, US
Assignee:
International Business Machines Corporation - Armonk NY
Abstract:
A method that includes: providing LRU selection logic that controllably passes requests for access to computer system resources to a shared resource via a first level and a second level; determining whether a request in a request group is active; presenting the request to the LRU selection logic at the first level when it is determined that the request is active; determining at the first level whether the request is the LRU request of the request group; forwarding the request to the second level when it is determined that the request is the LRU request of the request group; comparing the request at the second level to an LRU request from each of the request groups to determine whether the request is the LRU request of the plurality of request groups; and selecting the LRU request of the plurality of request groups to access the shared resource.
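The two-level selection can be sketched compactly. This example assumes each request carries a last-use timestamp (lower means less recently used) and encodes inactive slots as None; both conventions are illustrative, not from the patent.

```python
def select_two_level_lru(groups):
    """Sketch of the two-level LRU arbitration: each group first picks
    the least-recently-used of its active requests (first level), then
    the overall LRU among those per-group winners is selected (second
    level). Requests are (timestamp, id) tuples; None means inactive."""
    winners = []
    for group in groups:
        active = [r for r in group if r is not None]
        if active:
            winners.append(min(active))   # first level: LRU within group
    return min(winners) if winners else None  # second level: LRU overall
```

Comparing only one winner per group at the second level keeps the final comparator narrow regardless of how many requests each group holds.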
Handling Corrupted Background Data In An Out Of Order Execution Environment
Michael Fee - Cold Spring NY, US Christian Habermann - Stuttgart, DE Christian Jacobi - Schoenaich, DE Diana L. Orf - Somerville MA, US Martin Recktenwald - Steinenbronn, DE Hans-Werner Tast - Schoenbuch, DE Ralf Winkelmann - Holzgerlingen, DE
Assignee:
International Business Machines Corporation - Armonk NY
International Classification:
H03M 13/00
US Classification:
714752, 714758, 714763, 711100, 711113, 711118
Abstract:
Handling corrupted background data in an out of order processing environment. Modified data is stored on a byte of a word having at least one byte of background data. A byte valid vector and a byte store bit are added to the word. Parity checking is done on the word. If the word does not contain corrupted background data, the word is propagated to the next level of cache. If the word contains corrupted background data, a copy of the word is fetched from a next level of cache that is ECC protected, and the byte having the modified data is extracted from the word and swapped for the corresponding byte in the word copy. The word copy is then written into the next level of cache that is ECC protected.
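The byte-swap repair above can be sketched as follows. The byte valid vector is assumed to mark which bytes hold modified (store) data; those bytes are taken from the parity-bad word, while the background bytes come from the ECC-protected copy. Names and encodings are illustrative.

```python
def repair_word(word, ecc_copy, byte_valid):
    """Sketch of the corrupted-background-data repair: start from the
    ECC-protected copy fetched from the next cache level, then swap in
    the modified bytes flagged by the byte valid vector."""
    repaired = bytearray(ecc_copy)
    for i, valid in enumerate(byte_valid):
        if valid:  # modified byte survives; background comes from copy
            repaired[i] = word[i]
    return bytes(repaired)
```

The result is a word whose background bytes are known-good (ECC protected) and whose modified bytes are preserved, ready to be written back into the ECC-protected cache level.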