Razak Hossain - San Diego CA Lun Bin Huang - San Diego CA
Assignee:
STMicroelectronics, Inc. - Carrollton TX
International Classification:
G06F 7/50
US Classification:
708/671, 708/709
Abstract:
A computing system includes a plurality of full adders that each receives a bit-wise inversion of a bit of a first data, a bit of a second data, and a bit of a third data, respectively, and provides a sum output and a carry output. An exclusive-OR logic module receives the sum output of a first of the plurality of full adders and a carry output of a second of the plurality of full adders and provides an exclusive-OR output. An AND logic module has a plurality of inputs and an AND output, wherein the exclusive-OR output is electrically connected to one of the plurality of inputs of the AND logic module, and the AND output provides a signal that indicates whether the first data equals the sum of the second data and third data.
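The comparison trick in this abstract has a well-known software analogue: A equals B + C exactly when B + C + ~A is all ones (since ~A = -A - 1 in two's complement), and a three-input all-ones test needs no carry propagation, only per-bit full-adder outputs and an XOR with the neighboring carry. A minimal sketch in Python, with the bit width and variable names assumed for illustration:

```python
def equals_sum(a, b, c, width=8):
    # Test a == (b + c) mod 2**width without a carry-propagate adder.
    # b + c == a  <=>  b + c + ~a == all ones, and a three-input sum is
    # all ones iff, per bit, sum_i XOR carry_(i-1) == 1 (carries never
    # need to ripple, mirroring the full-adder/XOR/AND network above).
    mask = (1 << width) - 1
    na = ~a & mask                              # bit-wise inversion of a
    s = b ^ c ^ na                              # full-adder sum outputs
    k = ((b & c) | (b & na) | (c & na)) << 1    # carry outputs, shifted to bit i+1
    return ((s ^ k) & mask) == mask             # the AND over all XOR outputs
```

The final equality against `mask` plays the role of the wide AND gate in the abstract: it asserts only when every per-bit XOR output is 1.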
Method For Increasing Average Storage Capacity In A Bit-Mapped Tree-Based Storage Engine By Using Remappable Prefix Representations And A Run-Length Encoding Scheme That Defines Multi-Length Fields To Compactly Store IP Prefixes
Nicholas Julian Richardson - San Diego CA, US Suresh Rajgopal - San Diego CA, US Lun Bin Huang - San Diego CA, US
Assignee:
STMicroelectronics, Inc. - Carrollton TX
International Classification:
G06F 12/00 G06F 17/30
US Classification:
707/100, 707/3, 707/200
Abstract:
Sparsely distributed prefixes within a bitmapped multi-bit trie are compressed by one or more of: replacing a single entry table string terminating with a single prefix end node with a parent table entry explicitly encoding a prefix portion; replacing a table with only two end nodes or only an end node and an internal node with a single parent table entry explicitly encoding prefix portions; replacing two end nodes with a single compressed child entry at a table location normally occupied by an internal node and explicitly encoding prefix portions; and replacing a plurality of end nodes with a prefix-only entry located at the table end explicitly encoding portions of a plurality of prefixes. The compressed child entry and the prefix-only entry, if present, are read by default each time the table is searched. Run length encoding allows variable length prefix portions to be encoded.
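One common way to pack a variable-length prefix portion into a fixed-width field, in the spirit of the run-length scheme described above, is a marker-bit encoding: the prefix bits, then a 1, then zero padding, so the length is recoverable from the position of the last set bit. This is a frequent trie-engine idiom, not necessarily the patent's exact format; a sketch with an assumed 8-bit field:

```python
FIELD_WIDTH = 8  # assumed fixed field width

def encode_prefix(bits: str) -> int:
    # Pack a variable-length prefix (bit string shorter than the field)
    # as: prefix bits, a '1' marker, then zero fill.
    assert len(bits) < FIELD_WIDTH
    padded = bits + "1" + "0" * (FIELD_WIDTH - len(bits) - 1)
    return int(padded, 2)

def decode_prefix(field: int) -> str:
    # The marker is the last set bit; everything before it is the prefix.
    padded = format(field, f"0{FIELD_WIDTH}b")
    return padded[: padded.rindex("1")]
```

Because length is implicit in the marker position, prefixes of zero up to FIELD_WIDTH - 1 bits all fit the same field, which is what lets a compressed child or prefix-only entry hold several portions of differing lengths.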
Method For Increasing Storage Capacity In A Multi-Bit Trie-Based Hardware Storage Engine By Compressing The Representation Of Single-Length Prefixes
Nicholas Julian Richardson - San Diego CA, US Suresh Rajgopal - San Diego CA, US Lun Bin Huang - San Diego CA, US
Assignee:
STMicroelectronics, Inc. - Carrollton TX
International Classification:
G06F 17/00 G06F 7/00
US Classification:
707/101, 707/3, 707/4
Abstract:
Prefixes terminating with end node entries each containing identical length prefix portions in a single child table are compressed by replacing the end node entries with one or more compressed single length (CSL) prefix entries in the child table that contain a bitmap for the prefix portions for the end node entries. A different type parent table trie node entry is created for the child table. Where the prefix portions are of non-zero length, the parent table contains a bitmap indexing the end node entries. Where the prefix portions are of length zero, the parent table may optionally contain a bitmap for the prefix portions, serving as an end node. The number of prefix portions consolidated within the CSL node entry is based upon the prefix portion length.
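The core saving can be sketched simply: end nodes that all carry k-bit prefix portions of the same length collapse into a single 2^k-bit bitmap, one bit per possible portion value. This is a hypothetical simplification of the CSL entry described above, ignoring the parent-table indexing:

```python
def build_csl_bitmap(portions, length):
    # Replace one end-node entry per prefix portion with a single
    # bitmap: bit v is set iff the k-bit portion value v is stored.
    bm = 0
    for p in portions:
        assert 0 <= p < (1 << length)
        bm |= 1 << p
    return bm

def csl_lookup(bm, value):
    # Membership test is a single shift-and-mask on the bitmap.
    return (bm >> value) & 1 == 1
```

The consolidation factor follows directly from the portion length: shorter portions mean smaller bitmaps, so more of them fit in one fixed-size entry, matching the abstract's remark that the number of consolidated portions depends on portion length.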
System And Method For Handling Register Dependency In A Stack-Based Pipelined Processor
Nicholas J. Richardson - San Diego CA, US Lun Bin Huang - San Diego CA, US
Assignee:
STMicroelectronics, Inc. - Carrollton TX
International Classification:
G06F 9/30
US Classification:
712/217, 712/202
Abstract:
There is disclosed a data processor comprising 1) a register stack comprising a plurality of architectural registers that stores operands required by instructions executed by the data processor; 2) an instruction execution pipeline comprising N processing stages, where each processing stage performs one of a plurality of execution steps associated with a pending instruction being executed by the instruction execution pipeline; and 3) at least one mapping register associated with at least one of the N processing stages, wherein the at least one mapping register stores mapping data that may be used to determine a physical register associated with an architectural stack register accessed by the pending instruction.
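The mapping-register idea can be illustrated with a toy model: stack slots S0 (top of stack) onward are names relative to a moving top-of-stack pointer into a ring of physical registers, and each pipeline stage carries a snapshot of that pointer so it resolves stack names as its own instruction saw them, even after younger instructions push or pop. Class and method names here are assumptions for illustration, not the patent's structures:

```python
class StackRegisterFile:
    # Toy model: architectural stack registers S0..S(N-1) map onto a
    # ring of physical registers via a top-of-stack (TOS) pointer.
    def __init__(self, num_phys=8):
        self.num_phys = num_phys
        self.tos = 0  # physical index currently holding S0

    def snapshot(self):
        # Mapping data carried down the pipeline with each instruction.
        return self.tos

    def resolve(self, snapshot_tos, arch_index):
        # Physical register for architectural stack register S{arch_index},
        # as seen at the time the snapshot was taken.
        return (snapshot_tos + arch_index) % self.num_phys

    def push(self):
        self.tos = (self.tos - 1) % self.num_phys

    def pop(self):
        self.tos = (self.tos + 1) % self.num_phys
```

Per-stage snapshots are what make dependency checks possible: two stages can compare the physical registers their instructions touch even though both wrote them in terms of a shifting S0.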
System And Method For Path Compression Optimization In A Pipelined Hardware Bitmapped Multi-Bit Trie Algorithmic Network Search Engine
Lun Bin Huang - San Diego CA, US Nicholas Julian Richardson - San Diego CA, US Suresh Rajgopal - San Diego CA, US
Assignee:
STMicroelectronics, Inc. - Carrollton TX
International Classification:
H04L 12/56
US Classification:
370/392, 370/474, 370/475
Abstract:
For use in a pipeline network search engine of a router, a path compression optimization system and method is disclosed for eliminating single entry trie tables. The system embeds in a parent trie table (1) path compression patterns that comprise common prefix bits of a data packet and (2) skip counts that indicate the length of the path compression patterns. The network search engine utilizes the path compression patterns and the skip counts to eliminate single entry trie tables from a data structure. Each path compression pattern is processed one stride at a time in subsequent pipeline stages of the network search engine. The elimination of unnecessary single entry trie tables reduces memory space, power consumption, and the number of memory accesses that are necessary to traverse the data structure.
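The lookup step at a path-compressed parent entry can be sketched as a single compare-and-skip: the next `skip` key bits must equal the embedded compression pattern, and on a match the search jumps past all the single-entry tables the pattern replaced. A minimal illustration (bit-string representation assumed for clarity; hardware would process the pattern one stride per pipeline stage):

```python
def match_compressed(key_bits: str, pos: int, pattern: str, skip: int):
    # Parent entry stores (pattern, skip). Compare the next 'skip' key
    # bits against the pattern; a match advances the search position
    # past the eliminated single-entry trie tables in one step.
    if key_bits[pos: pos + skip] == pattern:
        return pos + skip      # new bit position within the key
    return None                # mismatch: no route down this path
```

Each eliminated table would otherwise have cost one memory access, so the savings in memory space and accesses scale directly with the skip count.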
Apparatus And Method Of Using Fully Configurable Memory, Multi-Stage Pipeline Logic And An Embedded Processor To Implement Multi-Bit Trie Algorithmic Network Search Engine
Lun Bin Huang - San Diego CA, US Suresh Rajgopal - San Diego CA, US Nicholas Julian Richardson - San Diego CA, US
Assignee:
STMicroelectronics, Inc. - Carrollton TX
International Classification:
H04L 12/28 G06F 7/00 G06F 9/26
US Classification:
370/392, 370/389, 370/401, 707/705, 711/216
Abstract:
A multi-bit trie network search engine is implemented by a number of pipeline logic units corresponding to the number of longest-prefix strides and a set of memory blocks for holding prefix tables. Each pipeline logic unit is limited to one memory access, and the termination point within the pipeline logic unit chain is variable to handle different length prefixes. The memory blocks are coupled to the pipeline logic units with a meshed crossbar and form a set of virtual memory banks, where memory blocks within any given physical memory bank may be allocated to a virtual memory bank for any particular pipeline logic unit. An embedded programmable processor manages route insertion and deletion in the prefix tables, together with configuration of the virtual memory banks.
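The virtual-bank idea reduces to a level of indirection: a pool of fixed-size memory blocks is parceled out to per-stage virtual banks, and any physical block may serve any stage, which is what the meshed crossbar enables in hardware. A hypothetical software model of the allocation and address translation (names and sizes assumed):

```python
class VirtualBanks:
    # Pool of fixed-size memory blocks assigned to per-stage virtual
    # banks; any physical block may back any stage's bank.
    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free = list(range(num_blocks))
        self.banks = {}   # stage -> ordered list of physical block ids

    def allocate(self, stage, n_blocks):
        # Route insertion may grow a stage's bank from the shared pool.
        if len(self.free) < n_blocks:
            raise MemoryError("no free blocks")
        blocks = [self.free.pop() for _ in range(n_blocks)]
        self.banks.setdefault(stage, []).extend(blocks)
        return blocks

    def translate(self, stage, addr):
        # Virtual address within a stage's bank -> (physical block, offset).
        blocks = self.banks[stage]
        return blocks[addr // self.block_size], addr % self.block_size
```

In the patent this bookkeeping is the embedded processor's job, alongside route insertion and deletion; the one-access-per-stage constraint is what makes the per-stage bank assignment worthwhile.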
Mechanism To Reduce Lookup Latency In A Pipelined Hardware Implementation Of A Trie-Based IP Lookup Algorithm
Suresh Rajgopal - San Diego CA, US Lun Bin Huang - San Diego CA, US Nicholas Julian Richardson - San Diego CA, US
Assignee:
STMicroelectronics, Inc. - Coppell TX
International Classification:
H04L 12/28
US Classification:
370/392, 370/465
Abstract:
A series of hardware pipeline units each processing a stride during prefix search operations on a multi-bit trie includes, within at least one pipeline unit other than the last pipeline unit, a mechanism for retiring search results from the respective pipeline unit rather than passing the search results through the remaining pipeline units. Early retirement may be triggered by either the absence of subsequent strides to be processed or completion (a miss or end node match) of the search, together with an absence of active search operations in subsequent pipeline units. The early retirement mechanism may be included in those pipeline units corresponding to a last stride for a maximum prefix length shorter than the pipeline (e.g., 20 or 32 bits rather than 64 bits), in pipeline units selected on some other basis, or in every pipeline unit. Worst-case and/or average latency for prefix search operations is reduced.
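The retirement condition reduces to a two-part predicate, which a sketch makes explicit (function and parameter names are assumptions for illustration):

```python
def can_retire_early(search_done: bool, downstream_active: list) -> bool:
    # A pipeline unit may retire its result instead of passing it on
    # only when (1) its own search is complete -- a miss, an end-node
    # match, or no strides left to process -- AND (2) no later unit
    # holds an active search, so results still leave the engine in order.
    return search_done and not any(downstream_active)
```

Condition (2) is the subtle part: retiring out from the middle of the pipeline while a later unit still holds an older search would reorder results, so early retirement only fires when the downstream units are idle.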
Apparatus And Method For Determining A Read Level Of A Flash Memory After An Inactive Period Of Time
Aldo G. Cometti - San Diego CA, US Lun Bin Huang - San Diego CA, US Ashot Melik-Martirosian - San Diego CA, US
Assignee:
STEC, Inc. - Santa Ana CA
International Classification:
G11C 7/00
US Classification:
365/201, 365/185.18, 365/226
Abstract:
Disclosed is an apparatus and method for determining a dwell time in a non-volatile memory circuit after a shutdown of the memory circuit. A voltage shift is calculated by comparing a first read level voltage required to read a test block stored before the shutdown and a second read level voltage required to read a second test block stored after the shutdown. A shutdown time is determined from a lookup table indexed by the voltage shift and a number of program/erase cycles. The dwell time is calculated as a function of the drive temperature, a clock, and a block time stamp. Once the dwell time is calculated, a controller calculates a new read level voltage based, in part, on the dwell time and provides one or more programming commands representative of the new read level voltage to the memory circuit to read the memory circuit.
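The data flow of the abstract can be traced in a few lines. Everything concrete below is an assumption made for illustration: the units (millivolts, seconds), the LUT contents, how the dwell terms combine, and the final linear adjustment are all placeholders, not the patent's calibration:

```python
def new_read_level(v_pre_mv, v_post_mv, pe_cycles, shutdown_lut,
                   temp_factor, clock_now, block_timestamp):
    # Step 1: voltage shift between pre- and post-shutdown test blocks.
    shift_mv = v_post_mv - v_pre_mv
    # Step 2: shutdown time from a LUT indexed by (shift, P/E cycles).
    shutdown_time = shutdown_lut[(shift_mv, pe_cycles)]
    # Step 3: dwell time from temperature, current clock and the block
    # time stamp (combination with shutdown time is assumed additive).
    dwell = shutdown_time + temp_factor * (clock_now - block_timestamp)
    # Step 4: new read level as an assumed linear function of dwell.
    return v_post_mv + dwell / 1000
```

The point of the LUT is that the shutdown interval itself is unobservable: the controller infers it from how far the read level drifted, scaled by wear (P/E cycles), then folds in the on-time history to set the next read voltage.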
Experience:
Broadcom
System Test Engineer
Broadcom May 2012 - Nov 2012
Intern - System Design
Illinois Institute of Technology 2007 - 2011
Teaching and Research Assistant
Coolsand Technologies Mar 2005 - Aug 2007
Algorithm Engineer
Education:
Illinois Institute of Technology 2007 - 2013
Doctorate, Doctor of Philosophy, Electrical Engineering
Beijing University of Posts and Telecommunications 2001 - 2004
Masters, Electrical Engineering
Skills:
MATLAB, LTE, C, OFDM, GSM, Embedded Systems, DSP, Wireless, WiMAX, WCDMA, Testing, 3GPP, Perl, RF, Mobile Communications, Signal Processing, Python, UMTS, Algorithms, Simulations, Programming, Bluetooth